
AI Conversations Are Not Protected by Attorney-Client Privilege and Can Be Used Against You
A federal judge in New York has ruled that conversations with AI tools like Claude are not privileged—and the implications reach far beyond criminal cases.
What Happened
On February 10, 2026, in United States v. Heppner, Judge Jed Rakoff of the Southern District of New York became the first judge to rule that AI-generated documents are neither protected by attorney-client privilege nor covered by the work-product doctrine.
Bradley Heppner, a Dallas financial services executive facing securities and wire fraud charges, used Claude to research legal questions during the government's investigation. He fed information from his defense counsel at Quinn Emanuel into the AI, generated 31 documents of prompts and responses, and shared them with his lawyers. When the FBI seized those documents, his attorneys argued they were privileged. Judge Rakoff disagreed—and ordered production.
Why Privilege Failed: Four Reasons
1. AI is not a lawyer
An AI tool has no law license, owes no duty of loyalty, and cannot form an attorney-client relationship. Legally, using Claude for legal research is the same as discussing your case with a friend—not with counsel.
2. Not for the purpose of legal advice
Anthropic's own materials say Claude avoids giving "specific legal advice." The tool explicitly disclaims providing legal services. You can't claim you used it for legal advice when the tool says it doesn't provide it.
3. No reasonable expectation of confidentiality
This finding has the broadest impact. Anthropic's policy states that user prompts and outputs may be disclosed to government authorities and used to train the model. Judge Rakoff found no reasonable expectation of confidentiality.
The same logic applies to OpenAI's products. Free and paid consumer plans (Claude Free/Pro/Max, ChatGPT Free/Plus/Pro) typically allow model training on your data. Opting out of training does not remove the platform's right to disclose data to the government or in response to legal process. Only enterprise agreements (ChatGPT Enterprise, Claude for commercial/government use) typically offer contractual confidentiality. A $20/month subscription does not buy privilege.
4. Pre-existing documents stay unprivileged
Heppner created the AI documents before sending them to his lawyers. Sending unprivileged materials to counsel after the fact does not retroactively make them privileged.
Work-Product Protection Fared No Better
Heppner's lawyers admitted he created the documents "of his own volition" and that the legal team did not direct him to run the AI searches. Without attorney direction, work-product protection does not attach. The government noted that if counsel had directed the AI research, the analysis might differ.
The Privilege Waiver Problem
Perhaps the most serious implication: Heppner fed information from his attorneys into Claude. The government argued, and the court agreed, that sharing privileged communications with a third-party AI platform may waive the privilege over the original attorney-client communications themselves. The privilege belongs to the client, but so does the responsibility to maintain it.
Related Trend
In the same court, Judge Oetken recently ruled that 20 million ChatGPT conversation logs are likely subject to compelled production in the OpenAI copyright litigation, finding that users have a "diminished privacy interest" in their AI conversations.
What You Should Do Now
- If you're an attorney: Advise clients explicitly that anything they input into an AI tool may be discoverable and is likely not privileged. Consider adding this to engagement letters and client onboarding.
- If you manage legal risk: Audit your organization's AI usage. Consumer-grade tools with standard terms offer no confidentiality protections. Enterprise agreements with contractual confidentiality may change the analysis.
- If you use AI for legal work: Treat every prompt as a potential disclosure and every output as a potentially discoverable document. The conversational interface creates a dangerous illusion of privacy.
The Bottom Line
AI tools feel private. They feel like talking to an advisor. But unless you have an enterprise agreement with contractual confidentiality protections, you are inputting information into a third-party platform that retains your data and reserves broad rights to disclose it.
United States v. Heppner is the first ruling, not the last. As AI adoption accelerates in the legal profession, expect more courts to grapple with privilege questions. For now, the message from the New York federal court is clear: privilege protects communications with your lawyer, not conversations with your AI.
This applies beyond criminal cases—to civil litigation, workplace investigations, regulatory inquiries, and internal business analysis. Any time an employee uses AI to analyze legal issues, evaluate liability, research complaints, or prepare for litigation, they may be creating discoverable records that adversaries can obtain.