On February 10, 2026, a federal judge in New York made a ruling that sent shockwaves through the legal, insurance, and compliance communities. In United States v. Heppner, Judge Jed Rakoff held that documents a defendant created using a public AI chatbot were not protected by attorney-client privilege or the work product doctrine.
The reasoning was straightforward: when you type confidential information into a public AI platform, you've shared it with a third party. And once you've done that, it's no longer confidential. Privilege doesn't apply.
This wasn't a ruling about AI being dangerous or untrustworthy. It was a ruling about architecture — about where your data goes when you use an AI tool, and who can access it.
And that distinction matters for every professional who works with sensitive documents.
What Actually Happened in Heppner
Bradley Heppner was arrested in November 2025 on securities and wire fraud charges. During the search of his home, federal agents seized electronic devices containing thirty-one documents he had generated using a consumer AI platform. After retaining legal counsel, Heppner had used the AI tool to prepare reports outlining potential defense strategies and legal arguments. He later shared those reports with his attorneys.
The defense argued these documents were privileged — protected by both attorney-client privilege and the work product doctrine. The government disagreed and moved for access.
Judge Rakoff sided with the government on both counts.
On the privilege question, the court found that communications with a public AI tool don't satisfy the confidentiality requirement because the platform is a third-party service accessible to anyone. Sending your legal analysis to a chatbot is not the same as sending it to your attorney in a sealed envelope. The information was disclosed to a commercial service provider with no duty of confidentiality equivalent to that of a lawyer.
On the work product question, the court noted that Heppner prepared the documents on his own initiative — his attorneys didn't direct him to use the AI tool. The work product doctrine generally requires that materials be prepared in anticipation of litigation at the direction of or in consultation with counsel.
Why This Matters Beyond the Courtroom
The immediate legal implications are significant, but the broader signal is what matters for professionals working with sensitive data.
The Heppner ruling makes explicit what many privacy advocates have been warning about for years: when you use a cloud-based AI service, your data is no longer exclusively yours. It passes through servers you don't control, is processed by systems you don't own, and may be stored, logged, or used for training in ways governed by terms of service that most users never read.
For attorneys, this means confidential legal strategy typed into a public chatbot can be compelled in court. For insurance professionals, it means policy analysis, claims assessments, and client data processed through cloud AI may not have the confidentiality protections they assumed. For compliance officers, it means regulatory documents analyzed through public AI tools may create exposure they didn't anticipate.
The New York State Bar Association has already raised the central follow-up question: would the court have ruled differently if the defendant had used a closed, enterprise AI system that kept information confidential and inaccessible to the public?
That question points directly at the architectural solution.
The Architecture Problem
The Heppner ruling doesn't say AI tools are inherently incompatible with privilege or confidentiality. It says public AI tools are — because they break the chain of confidentiality.
This is a critical distinction. The problem isn't the AI. The problem is where the AI runs and who has access to the data you feed it.
When you use a cloud-based AI service, your workflow looks like this:

Your document → network upload → the provider's servers → model processing → response returned to you (with logging, retention, and possible training use along the way).
At every step, your data is outside your control. It traverses networks you don't manage, sits on servers you don't own, and is processed by systems with their own data retention and training policies.
When you use a local AI system, the workflow is fundamentally different:

Your document → local index → local model → response, with every step on hardware you control.
The data never leaves your environment. There's no third-party server. No cloud upload. No network transmission of confidential content. The architectural problem that the Heppner court identified simply doesn't exist.
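To make the contrast concrete, here's a minimal sketch of what "no third party" looks like in practice, assuming an Ollama server running on its default localhost port. The function names are illustrative, not Librarian's actual API:

```python
import json
import urllib.request

# Ollama's default endpoint listens on the loopback interface, so a
# prompt sent here terminates at 127.0.0.1 and never leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> dict:
    """Build the JSON payload for a local Ollama generation call."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_locally(prompt: str) -> str:
    """Send a confidential prompt to the local model.

    No third party is involved: the request never crosses a network
    boundary you don't control.
    """
    payload = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Compare that to a cloud workflow, where the only change is the URL — and that one change is the entire difference the Heppner court cared about.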
What the Legal Community Is Saying
Legal analysts covering the Heppner ruling have been remarkably consistent in their recommendations. Across major law firms and bar associations, the guidance converges on a few points: use closed AI systems with strict privacy policies, ensure AI use occurs at the direction of counsel, limit who has access to AI inputs and outputs, and maintain architectural isolation between confidential data and public services.
One widely discussed analysis put it bluntly: the company that builds a zero-knowledge, architecturally isolated AI platform for professional work — one that works with existing privilege law rather than around it — will have an enormous market advantage. Not by inventing new legal frameworks, but by building technology that respects the ones that already exist.
What This Means in Practice
If you work with confidential documents — legal contracts, insurance policies, medical records, financial statements, compliance material — the Heppner ruling should change how you evaluate AI tools.
The questions to ask are no longer just "is this AI accurate?" or "does it save me time?" They now include:
Where does my data go?
If the answer involves a cloud server, a third-party API, or any system outside your direct control, your confidentiality may be compromised — not just ethically, but legally.
Who can access my inputs?
If the AI provider's terms of service allow them to log, store, or use your inputs for model training, you've potentially disclosed confidential information to a third party.
Can my AI usage be compelled in court?
After Heppner, the answer for public AI platforms is yes. For local, architecturally isolated systems, the analysis may be very different.
Do I control the model?
Cloud AI services can change their models, policies, and data handling at any time. A model running on your own hardware, processing your documents locally, gives you complete control over the entire chain.
The Local AI Alternative
This is exactly why we built Librarian as a local-first AI platform.
Librarian runs entirely on your machine. Your documents are indexed locally, stored in a local vector database, and processed by AI models running on your own hardware through Ollama. There are no cloud uploads, no API calls to external services, and no data transmission outside your environment.
When you ask Librarian a question about your documents, the entire process — from retrieval to reasoning to response — happens on your machine. The architectural concern that drove the Heppner ruling doesn't apply because there's no third party involved at any point.
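Librarian's internals aren't reproduced here, but the retrieval step of a local pipeline like this can be sketched in a few lines — embeddings come from the local Ollama server, and ranking is plain cosine similarity computed on your own machine. The model name and helper functions are illustrative assumptions:

```python
import json
import math
import urllib.request

# Local embeddings endpoint — same loopback-only property as generation.
EMBED_URL = "http://localhost:11434/api/embeddings"

def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    """Fetch an embedding vector from the local Ollama server."""
    payload = json.dumps({"model": model, "prompt": text}).encode()
    req = urllib.request.Request(
        EMBED_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity, used to rank document chunks against a query."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_chunk(query_vec: list[float], chunks: list[tuple]) -> str:
    """Return the stored chunk whose vector best matches the query.

    `chunks` is a list of (text, vector) pairs from a local store —
    nothing here touches a network beyond localhost.
    """
    return max(chunks, key=lambda c: cosine(query_vec, c[1]))[0]
```

The retrieved chunk is then passed to the local generation model as context — retrieval, reasoning, and response all on one machine.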
For teams that need to share access, Librarian supports LAN mode — where a trusted computer serves as the AI server for devices on your local network. Even in this configuration, data stays within your physical network. It never touches the public internet.
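With Ollama, serving a LAN typically means setting the OLLAMA_HOST environment variable on the trusted machine (e.g., OLLAMA_HOST=0.0.0.0) so it accepts connections from the local network; how Librarian wires this up internally is not shown here. On the client side, a simple guard using Python's standard ipaddress module can verify that documents are only ever sent to a private address. The server address below is hypothetical:

```python
import ipaddress

def is_lan_address(host: str) -> bool:
    """True if `host` is a private (RFC 1918) or loopback address,
    meaning traffic to it stays off the public internet."""
    addr = ipaddress.ip_address(host)
    return addr.is_private or addr.is_loopback

# Client-side sanity check before sending any documents anywhere.
SERVER = "192.168.1.20"  # hypothetical LAN server address
assert is_lan_address(SERVER), "refusing to send data to a public host"
```

A check like this enforces the architectural guarantee in code: even a misconfigured client fails loudly rather than quietly shipping confidential material to a public endpoint.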
And with FORGE, our training data generation and fine-tuning add-on, you can go further: create a custom AI model trained specifically on your documents, running locally, with domain expertise that no cloud service can match — because it was built from your data, on your hardware, under your control.
Looking Ahead
The Heppner ruling is the first major judicial decision addressing AI and privilege, but it won't be the last. As AI tools become embedded in professional workflows across law, insurance, finance, and healthcare, courts will continue to grapple with how existing confidentiality frameworks apply to new technology.
What seems clear is that the direction of these rulings will depend heavily on architecture. Cloud-based tools that route confidential data through public infrastructure will face ongoing scrutiny. Local tools that keep data within the user's environment will be in a fundamentally stronger position.
The professionals and organizations that recognize this now — and choose their AI tools accordingly — won't have to worry about the next ruling. Because they'll have already solved the problem.
Librarian is a private, local AI document assistant that runs entirely on your machine. No cloud. No uploads. No third parties. Start a free trial →
FORGE, our fine-tuning add-on, lets you train custom AI models on your own documents without any data leaving your environment. Learn more about FORGE →
Disclaimer: This article discusses a court ruling for informational purposes. It is not legal advice. Consult a qualified attorney for guidance specific to your situation.