How AI Like ChatGPT is Impacting the Legal System

As artificial intelligence tools like ChatGPT become increasingly integrated into daily life, legal systems worldwide are beginning to grapple with the challenges these technologies pose, prompting new debates about responsibility, regulation, and the boundaries of automated decision-making.
TL;DR
- Court orders OpenAI to hand over millions of ChatGPT logs.
- AI chatbots lack the strict “no-logs” policies common among VPNs.
- Users advised to review privacy policies and seek local AI.
A Legal Challenge Raises the Stakes for AI Privacy
A recent judicial order requiring OpenAI to hand over 20 million records from its popular chatbot, ChatGPT, has reignited concerns about how artificial intelligence platforms manage personal data. The request, made in the context of a high-profile copyright dispute, highlights the staggering volume of information such platforms accumulate and store: the figure initially sought, a daunting 120 million logs, hints at the immense scale and potential sensitivity of these datasets.
Different Standards: VPNs vs. AI Chatbots
This case underscores a striking contrast between AI chatbots and another category of digital services: virtual private networks. Most reputable VPN providers follow rigorous “no-logs” policies, promising users that no personal data or identifiers are retained. In practice, this means that even if compelled by a court order, these companies have no records to hand over. That position has held up in legal proceedings, such as the Windscribe case in Greece, where the absence of logs led to the dismissal of claims.
By comparison, major chatbot providers routinely keep extensive records: not only conversation histories but also technical details like device information and geolocation data.
To clarify, here’s how these two sectors differ:
- VPNs: No identifiable user data is accessible for authorities.
- AI chatbots: User exchanges and metadata are often stored for prolonged periods.
User Implications and Legal Risks
For end users, the prolonged retention of chatbot conversations, which sometimes persist until actively deleted, raises troubling uncertainties. Written commitments in privacy policies don’t always align with what courts can require from companies. The standoff between The New York Times and OpenAI, which resulted in the long-term preservation of specific content, illustrates this disconnect between public assurances and operational realities.
Cybersecurity expert Dr. Ilia Kolochenko (ImmuniWeb) recommends heightened caution: anything shared with an AI platform could one day surface as evidence in a legal dispute or even a criminal investigation.
Towards Better Data Control
Given that avoiding AI entirely is nearly impossible in today’s professional world, alternatives are emerging for those seeking greater privacy. Locally-run models, such as Lumo (developed by Proton), process data on the user’s device rather than transmitting it to remote servers. Although these solutions may not match cloud-based models in raw performance, they offer markedly stronger confidentiality.
Ultimately, experts advise reviewing privacy policies before sharing sensitive information, using VPNs prudently, and preferring local AI applications whenever possible. Together, these habits help mitigate the risks inherent in the mass data retention practiced by leading AI providers.