On May 13, 2025, U.S. Magistrate Judge Ona Wang ordered OpenAI to preserve all ChatGPT output logs indefinitely. She later rejected a user's petition to rescind the order on privacy grounds, upholding her decision on June 24, 2025. The order, part of The New York Times v. OpenAI copyright infringement lawsuit, stems from the newspaper's claim that OpenAI used its articles without permission to train ChatGPT, potentially reproducing copyrighted material. The ruling has sparked debate over privacy, transparency, and the legal boundaries of AI. Below is an outline of its significance across key dimensions.
Erosion of Privacy Expectations: The ruling mandates OpenAI to retain all user conversations, including those in "temporary chat" mode or manually deleted, overriding user expectations of data deletion within 30 days, as per OpenAI's prior policy. This affects millions of users globally, from individuals discussing personal matters to businesses sharing sensitive data via OpenAI's API.
Risk of Identification: Although Judge Wang emphasised that the order pertains to preservation, not disclosure, the retention of detailed chat logs increases the risk of identifying users, especially in personal or sensitive conversations. For example, medical or legal discussions could be traced back to individuals, raising concerns about confidentiality.
Lack of Transparency: User Aidan Hunt discovered the order via an online forum, not through direct notification from OpenAI, highlighting a lack of proactive communication. This has fuelled distrust.
Precedent for Future Cases: The decision sets a precedent for courts to mandate data retention in AI-related litigation, potentially undermining user control over personal data and conflicting with privacy laws like GDPR, as noted in OpenAI's appeal.
Challenge to Data Retention Policies: OpenAI's standard practice of deleting chats within 30 days (unless users opt out) has been disrupted, forcing the company to overhaul its data infrastructure. This creates a "substantial burden," as OpenAI argued, and may push users toward competitors with stronger privacy guarantees.
Call for Enhanced User Controls: The ruling underscores the need for AI companies to offer granular privacy options, such as anonymous modes or clear deletion guarantees, as suggested by TechRadar. Without these, users may treat AI tools like ChatGPT with caution, akin to "a coworker who might be wearing a wire."
Regulatory Scrutiny: The order amplifies calls for stricter AI data governance, as seen in posts on X advocating for decentralized AI platforms like @ritualnet. It may prompt regulators to enforce transparency requirements, such as mandatory notifications when data is preserved for legal purposes.
Strengthening Publisher Claims: The ruling supports The New York Times' argument that ChatGPT logs are essential to prove copyright infringement, as the chatbot may reproduce copyrighted material verbatim. Judge Wang's decision to preserve logs suggests judicial recognition of the need to investigate AI outputs, bolstering the case's progression to trial.
Fair Use Debate: OpenAI's defence hinges on the "fair use" doctrine, arguing that its use of copyrighted material is transformative. However, the preservation order implies that courts are open to examining whether AI outputs infringe on copyrights, potentially reshaping the legal boundaries of AI training data. A 2025 NPR report notes that the case could redefine fair use in AI contexts.
Industry-Wide Ramifications: The lawsuit, consolidated with claims from The New York Daily News and the Center for Investigative Reporting, could set a precedent for how publishers challenge AI companies. A victory for The Times might lead to demands for dataset destruction or hefty fines, as noted by NPR, impacting AI development broadly.
Judicial Perspective: Judge Wang rejected accusations of a "nationwide mass surveillance program," arguing that data preservation for litigation is standard and distinct from law enforcement. Her sharp rebuke, including a "[sic]" in response to user Aidan Hunt's petition, underscores the judiciary's view that privacy concerns are secondary to evidentiary needs in civil cases.
User Backlash: The ruling has sparked panic among users, with X posts like @EFF calling it "badly misguided" and warning of privacy erosion. The disconnect between legal norms and user expectations highlights a gap in public understanding of data retention in litigation.
OpenAI's Response: OpenAI's appeal, led by CEO Sam Altman, frames the order as an "overreach" that violates privacy commitments. Altman's call for "AI privilege" akin to lawyer-client confidentiality reflects a push to protect user data, potentially influencing future legal frameworks.
Chilling Effect on AI Use: The ruling may deter users from engaging with ChatGPT for sensitive tasks, owing to fears that trade secrets or personal disclosures could be exposed. This could reduce adoption among businesses and individuals.
Competitive Pressure: The order may drive users to competitors such as Anthropic, which faces similar lawsuits but may offer stronger privacy protections. This could reshape the AI market, favouring firms with robust data governance.
News Industry Dynamics: The lawsuit reflects broader tensions between media and AI companies, with some publishers like The Associated Press opting for content-sharing deals while others pursue litigation. A 2025 AP News report notes that the case could impact the news industry's revenue model by addressing AI-driven traffic loss.
Transparency Needs: The ruling exposes the need for AI companies to notify users promptly about data retention changes, as OpenAI delayed disclosure for over three weeks. VentureBeat reports that OpenAI's blog post on June 5, 2025, was its first public acknowledgment, prompting calls for mandatory notifications.
Privacy Law Evolution: The case highlights gaps in privacy laws like GDPR, which may not fully address AI-specific data retention. A 2025 Decrypt article suggests that the ruling could spur legislative efforts to balance litigation needs with user rights.
Ethical AI Development: The controversy underscores the need for ethical AI practices, including clear data policies and user consent mechanisms, to rebuild trust.
In conclusion, the federal judge's ruling to uphold OpenAI's preservation of all ChatGPT logs is a pivotal moment in the intersection of AI, privacy, and copyright law. It undermines user privacy expectations, challenges AI governance norms, and strengthens the New York Times' case by prioritising evidentiary needs over user control. The decision could reshape fair use interpretations, influence AI adoption, and prompt regulatory reforms. While OpenAI's appeal and calls for "AI privilege" aim to protect users, the ruling highlights a broader need for transparency and robust privacy frameworks in AI development. As the lawsuit progresses, its outcome will likely have lasting implications for the tech and media industries, redefining the balance between innovation, intellectual property, and individual rights.
A federal judge rejected a ChatGPT user's petition against her order that OpenAI preserve all ChatGPT chats
The order followed a request by The New York Times as part of its lawsuit against OpenAI and Microsoft
OpenAI plans to continue arguing against the ruling
OpenAI will be holding onto all of your conversations with ChatGPT and possibly sharing them with a lot of lawyers, even the ones you thought you deleted. That's the upshot of an order from the federal judge overseeing a lawsuit brought against OpenAI by The New York Times over copyright infringement. Judge Ona Wang upheld her earlier order to preserve all ChatGPT conversations for evidence after rejecting a motion by ChatGPT user Aidan Hunt, one of several from ChatGPT users asking her to rescind the order over privacy and other concerns.
Judge Wang told OpenAI to "indefinitely" preserve ChatGPT's outputs since the Times pointed out that would be a way to tell if the chatbot has illegally recreated articles without paying the original publishers. But finding those examples means hanging onto every intimate, awkward, or just private communication anyone's had with the chatbot. Though what users write isn't part of the order, it's not hard to imagine working out who was conversing with ChatGPT about what personal topic based on what the AI wrote. In fact, the more personal the discussion, the easier it would probably be to identify the user.
Hunt pointed out that he had no warning that this might happen until he saw a report about the order in an online forum, and is now concerned that his conversations with ChatGPT might be disseminated, including "highly sensitive personal and commercial information." He asked the judge to vacate the order or modify it to leave out especially private content, like conversations conducted in private mode, or those discussing medical or legal matters.
According to Hunt, the judge was overstepping her bounds with the order because "this case involves important, novel constitutional questions about the privacy rights incident to artificial intelligence usage – a rapidly developing area of law – and the ability of a magistrate [judge] to institute a nationwide mass surveillance program by means of a discovery order in a civil case."
Judge Wang rejected his request because his privacy concerns aren't related to the copyright issue at hand. She emphasized that the order is about preservation, not disclosure, and that it's hardly unique or uncommon for courts to tell a private company to hold onto certain records for litigation. That's technically correct, but, understandably, an everyday person using ChatGPT might not feel that way.
She also seemed to particularly dislike the mass surveillance accusation, quoting that section of Hunt's petition and slamming it with the legal language equivalent of a diss track. Judge Wang added a "[sic]" to the quote from Hunt's filing and a footnote pointing out that the petition "does not explain how a court's document retention order that directs the preservation, segregation, and retention of certain privately held data by a private company for the limited purposes of litigation is, or could be, a 'nationwide mass surveillance program.' It is not. The judiciary is not a law enforcement agency."
That 'sic burn' aside, there's still a chance the order will be rescinded or modified after OpenAI goes to court this week to push back against it as part of the larger paperwork battle around the lawsuit.
Deleted but not gone
Hunt's other concern is that, regardless of how this case goes, OpenAI will now have the ability to retain chats that users believed were deleted and could use them in the future. There are concerns over whether OpenAI will prioritize user privacy over legal expedience. OpenAI has so far argued in favor of that privacy and has asked the court for oral arguments to challenge the retention order, which will take place this week. The company has said it wants to push back hard on behalf of its users. But in the meantime, your chat logs are in limbo.
Many may have felt that writing into ChatGPT is like talking to a friend who can keep a secret. Perhaps more will now understand that it still acts like a computer program, and the equivalent of your browser history and Google search terms are still in there. At the very least, hopefully, there will be more transparency. Even if it's the courts demanding that AI companies retain sensitive data, users should be notified by the companies. We shouldn't discover it by chance on a web forum.
And if OpenAI really wants to protect its users, it could start offering more granular controls: clear toggles for anonymous mode, stronger deletion guarantees, and alerts when conversations are being preserved for legal reasons. Until then, it might be wise to treat ChatGPT a bit less like a therapist and a bit more like a coworker who might be wearing a wire.