Is Your AI Chat History Private? A Federal Court Just Said No — What Every Taxpayer and Contractor Needs to Know
On February 10, 2026, a federal judge ruled that conversations with consumer AI tools like Claude and ChatGPT are not protected by attorney-client privilege and are fully discoverable by the government — including the IRS. If you have ever described a tax dispute, discussed unreported income, or shared your legal situation with a consumer AI chatbot, that conversation is likely not private, may be obtainable under subpoena, and could end up as evidence against you. This post explains the United States v. Heppner ruling, what it means for contractors and taxpayers in IRS or CDTFA disputes, and exactly what you should and should not be typing into an AI tool right now.
If you have used AI tools to research an IRS audit, CDTFA dispute, or tax controversy — you need a professional assessment of your exposure before you do anything else.
Book A Call With Adam

I had a client call me two weeks ago — a roofing contractor in the Inland Empire, good business, five crews, doing about $5M a year. He had been in a CDTFA sales tax dispute for eight months. Smart guy. He had been using Claude to research his options, draft questions for me, and think through his defenses. He had shared some of what Claude told him with me via email.
When I told him what Judge Rakoff had just ruled, there was a long pause on the phone. "So they could actually get that?" Yes. They could. And in my reading of the code and 25 years handling these cases, the answer has been yes for a while — we just didn't have a court saying it clearly until now.
What most advisors are telling their clients right now: "just be careful what you type." That's not enough. Here is what the ruling actually says, what exposure actually looks like, and why I think this is going to change how every taxpayer under examination uses AI tools.
What Did the Judge Actually Rule in United States v. Heppner?
The case involves Bradley Heppner, a Dallas financial services executive charged with securities and wire fraud connected to the collapse of GWG Holdings — an alleged $150 million scheme. After his arrest in November 2025 and after retaining defense counsel at Quinn Emanuel, Heppner used the consumer version of Anthropic's Claude to research his legal situation. He prepared approximately 31 documents from those AI sessions and shared them with his lawyers.
When the FBI seized his devices, defense counsel asserted attorney-client privilege over the AI documents. The government moved to compel production. Judge Jed Rakoff of the Southern District of New York ruled from the bench on February 10, 2026, followed by a written opinion on February 17:
"I'm not seeing remotely any basis for any claim of attorney-client privilege."
— U.S. District Judge Jed S. Rakoff, United States v. Heppner, No. 25 Cr. 503 (S.D.N.Y.)
This is the first federal court ruling to directly address whether privilege attaches to materials generated through a consumer AI platform. The court gave three specific reasons for its decision: (1) an AI is not a licensed attorney, so no attorney-client relationship can exist; (2) the platform's Terms of Service disclaim any expectation of confidentiality; and (3) sending AI-generated documents to your attorney after the fact does not retroactively create privilege.
What Is the Waiver Bomb That Most Coverage Is Missing?
The ruling gets most attention for the basic privilege holding. What gets less attention — and what I think is far more dangerous for clients currently in controversy — is the waiver analysis.
Heppner did not merely use Claude to research generic legal questions. He fed information he had received from his attorneys at Quinn Emanuel into Claude. The government argued — and Judge Rakoff agreed — that sharing privileged attorney-client communications with a third-party AI platform may constitute a waiver of the underlying privilege over those original attorney-client communications.
If you paste privileged communications from your tax professional or attorney into a consumer AI tool, you may have stripped privilege from those original communications — not just from the AI conversation itself. That's not a forward-looking problem. That's a problem you already have if you've done this.
In my experience handling CDTFA appeals and IRS collection matters, this is the scenario I am most concerned about for clients: someone under examination who has been using Claude or ChatGPT to think through their strategy, drafting email questions to me that they're running through AI first, or summarizing our conversations into an AI tool to get "a second opinion." Every one of those interactions potentially compromises the privilege over my communications with them.
If you're in an active IRS or CDTFA matter and you've been using consumer AI tools to research your situation, the time to get an assessment is before the government asks for discovery — not after.
Book A Call With Adam

What Does This Mean for a Contractor Under IRS or CDTFA Examination?
This ruling came in a criminal case involving securities fraud. I want to be precise here, because this matters: Judge Rakoff did not rule that Heppner applies to civil cases. His ruling was explicitly limited to a criminal defendant's use of a consumer AI platform. Commentators at Proskauer, in the National Law Review, and on Lexology all noted that whether the same holding applies in civil contexts was left unanswered. What Rakoff did establish — and what does transfer across contexts — are the foundational privilege principles: AI is not an attorney, consumer Terms of Service disclaim confidentiality, and feeding privileged communications into AI likely waives the underlying privilege. In my view, a civil court applying that same analysis would almost certainly reach the same result; the privilege is gone, and the civil ruling just is not on paper yet. For the contractors and business owners I work with, here is how the exposure maps:
A word on who actually has privilege — because almost no one does
Before I map the risk for contractors, I want to address something that comes up constantly: the assumption that communicating with a tax professional creates privilege. It does not — and this is true regardless of which credential that professional holds.
As a CRTP, I have zero privilege with my clients. None. The IRS can summons my files, my workpapers, my emails, my notes — all of it, anytime. The same is true of a CPA and an EA. The only limited protection that exists for CPAs and EAs is the §7525 federally authorized tax practitioner privilege — and in my experience working controversy cases, it is nearly worthless. It applies only to non-criminal federal tax matters, it evaporates the moment a case has criminal exposure, and courts have consistently interpreted it as narrow. For practical purposes in any real controversy, §7525 provides nothing you can count on.
What about a tax attorney? Here is where it gets important. A tax attorney only has privilege when functioning as a lawyer — giving legal advice, developing legal strategy. When a tax attorney is acting as a tax preparer or compliance professional — running numbers, preparing returns, doing standard planning work — courts have consistently held the privilege does not attach. The dominant-purpose test controls: what was the communication primarily for? Legal advice or tax preparation? If tax prep, no privilege, even if a licensed attorney did the work. (United States v. Frederick, 7th Cir. 1999.)
The one narrow exception is the Kovel doctrine (United States v. Kovel, 2d Cir. 1961): an attorney can extend privilege to a third-party accountant or advisor, but only if that advisor is working under the attorney's direction to help the attorney give legal advice — not if the client independently hires a CPA or CRTP. A Kovel arrangement requires intentional, formal structure. It does not happen by accident.
The point for this post: if your privilege chain is already thin or nonexistent — which it is for most taxpayers working with a CRTP, CPA, or EA — then the AI privilege question is almost academic. You already have nothing to waive. The more pressing question is what you said, and whether it creates exposure on its own terms.
Unreported income or cash transactions
If you asked Claude or ChatGPT whether the IRS would catch cash deposits you haven't been reporting, whether your method of separating income is detectable, or how to structure transactions to reduce your paper trail — that conversation is not privileged. Under Heppner, it is likely discoverable. To the government, it may read as a written admission of intent.
Sales tax and CDTFA disputes
In my reading of the code and based on the Heppner holding, any conversation where you described your sales tax practices, your record-keeping gaps, or your theory of the dispute to a consumer AI tool is potentially accessible to the CDTFA in a Notice of Determination proceeding or appeal. The CDTFA has broad administrative subpoena authority. This is not theoretical.
IRS collection matters and installment agreements
If you have been using AI to model your financial disclosures for a Collection Due Process hearing, to think through what assets to list on a Form 433, or to research how the IRS values certain assets — that research trail is potentially obtainable. The government's interest is in your actual financial position, not the position you ultimately disclosed. AI conversations showing you modeled different scenarios could be used to challenge the accuracy of your formal disclosures.
The scenario I am watching for in my own caseload: a client who described their CDTFA situation in detail to a consumer AI tool, received guidance that influenced how they characterized transactions in their formal response, and then shared that response with me. Under Heppner's waiver analysis, the AI session that shaped the response — and potentially my communications with them about it — may now be compromised. If this describes your situation, call before the other side requests discovery.
What Happens If You Told an AI About Something Illegal You Are Doing?
This is the question nobody wants to ask, but it is the one with the highest stakes. Heppner was a criminal case — securities fraud. And that distinction matters more than most coverage acknowledges.
Here is the honest answer, in my reading of the code and based on my experience in controversy work: the real danger of AI chat exposure is not the civil audit. It is what you said when you thought no one was listening.
The criminal case scenario — the one that actually ends careers
If a business owner describes an ongoing illegal scheme to an AI chatbot — unreported cash, fabricated invoices, offshore accounts, payroll fraud, whatever it is — that conversation is not privileged. Full stop. If that business later becomes the subject of a criminal referral to the Department of Justice, prosecutors can subpoena Anthropic or OpenAI directly. They do not need to go through the taxpayer. They do not need to give the taxpayer advance notice if a criminal investigator issues the summons. Under IRC §7609(c)(2)(E), a criminal investigator can issue a summons to a third party without the taxpayer notification requirement that applies in civil examinations.
The practical result: a business owner who described his cash scheme in detail to ChatGPT, who later gets referred to IRS Criminal Investigation, may have handed prosecutors a confession they did not know existed. He told a machine. The machine told its server. The server is now a federal witness.
The most dangerous thing you can type into a consumer AI tool is not something about your audit. It is something about what you are actually doing that you have not reported. That conversation is the one that turns a civil examination into a criminal referral.
The civil audit — the current picture is more nuanced
Here is where I want to be precise, because most coverage either panics people or completely understates the risk. For a routine IRS civil examination — a correspondence audit, a field exam, even a CP2000 notice — the question of whether an agent can actually obtain your AI conversations is more complicated than Heppner alone suggests.
Heppner established that AI conversations are not privileged. It did not by itself give IRS revenue agents a new tool. The court ruling addresses privilege — whether you can block production of AI records you already have. The separate question is whether an IRS revenue agent can issue an administrative summons to Anthropic or OpenAI to demand your conversation history in a routine civil audit.
Under IRC §7602, the IRS has broad summons authority. The statute allows it to summon any person who possesses records that "may be relevant" to a tax investigation. Under United States v. Powell, 379 U.S. 48 (1964), the IRS must establish that: the investigation has a legitimate purpose; the information sought is relevant; the information is not already in IRS possession; and the required administrative steps have been taken. A court reviewing a summons to Anthropic in a civil audit would ask whether your AI conversation history could "throw light on" the subject under investigation — a low but not non-existent bar.
The honest answer is: no court has ruled specifically on whether an IRS civil summons to an AI company is enforceable in a routine examination. It is legally untested. In my reading of the code, the authority probably exists — IRC §7602 is deliberately broad, and courts have historically enforced summonses to banks, accountants, and other third parties under the Powell standards. But the practical likelihood of a revenue agent issuing a summons to Anthropic in a garden-variety audit is, for now, low. The administrative hurdles, the public relations sensitivity, and the novelty of the question all work against routine use.
For CDTFA audits, the analysis is similar. The CDTFA, as successor to California's Board of Equalization, has administrative subpoena authority, but summoning a tech company's AI servers would be a significant escalation from normal audit practice. It has not happened yet, at least as far as I can tell from current proceedings.
Criminal cases: The risk is real, present, and immediate. If you have described illegal activity to a consumer AI tool and you are under criminal investigation or risk becoming one, that conversation needs to be assessed by a professional now. No prediction about what the government will do — only the acknowledgment that it can.
Civil IRS audit: Legally possible under IRC §7602, practically unlikely in a routine examination today — but the window is open, the legal authority is plausible, and the trend is one direction. "Unlikely today" is not the same as "safe forever."
CDTFA civil dispute: Same analysis as civil IRS. Untested, possible, directionally concerning.
Can an IRS Agent Actually Summons Your AI Chat History? A Plain-Language IRC §7602 Answer
Because I get this question directly: yes, under existing law, an IRS agent probably has the authority to summons Anthropic or OpenAI for your conversation records. Here is the specific legal framework:
IRC §7602(a) authorizes the IRS to issue a summons to "any person" possessing "books, papers, records, or other data" that "may be relevant" to its investigation. The Supreme Court confirmed in United States v. Arthur Young & Co., 465 U.S. 805 (1984), that "may be relevant" is intentionally broad — items of even "potential relevance" qualify. AI conversation records describing your business practices, financial decisions, or tax strategies would easily satisfy this standard in a court enforcement proceeding.
The third-party summons procedure under IRC §7609 requires the IRS to notify you within three days of serving the summons on the third party. You then have 20 days to file a petition to quash it in U.S. District Court. You could challenge the summons on relevance grounds, improper purpose grounds, or any remaining privilege arguments — but after Heppner, the privilege argument is significantly weakened for consumer AI conversations.
There is one important limitation: IRC §7602(d) prohibits the IRS from issuing or enforcing a civil summons once a criminal referral to the Justice Department is in effect for that taxpayer. So there is a point at which the civil and criminal tracks diverge — once a case is referred criminally, the civil summons authority closes. That does not help you in the period before referral, and criminal investigators have their own summons authority under the IRC §7609(c)(2)(E) exception, which bypasses the taxpayer notice requirement entirely.
Bottom line on IRC §7602: the legal authority to summons AI records in a civil examination likely exists under current statute. Whether any IRS agent will actually use it is a different question — one that will be answered in the next few years as this area develops.
Which AI Packages Actually Have Confidentiality Protection — and What Does That Get You?
This is where the practical guidance gets complicated, and where I think a lot of advisory commentary is misleading people. Here is the honest breakdown:
Consumer tier (Claude Free, Pro, Max / ChatGPT Free, Plus, Pro)
No contractual confidentiality. Data may be used for model training by default (opt-out available but does not eliminate disclosure rights). Retention up to 30 days if training is off; up to five years if you have training enabled. These are the accounts Judge Rakoff's opinion is specifically about. Not suitable for anything sensitive.
Claude for Work / Team accounts
This is where many small businesses make a costly mistake. Team accounts sound like enterprise protection. They are not. Team accounts are still consumer-tier under Anthropic's contract framework. Data is not used for model training by default — but the Terms of Service still permit disclosure to governmental authorities. The Heppner reasoning applies: no contractual confidentiality clause that would rebut the government's position. Better than consumer, but not enterprise-grade protection.
Claude Enterprise / Commercial API with Data Processing Agreement
This is the tier that at least partially changes the legal analysis. Claude Enterprise operates under Commercial Terms of Service that explicitly prohibit model training on customer data. Enterprise customers can negotiate a Zero Data Retention (ZDR) addendum — a signed contract under which Anthropic does not persist inputs or outputs beyond real-time abuse detection processing. No data stored means, in theory, nothing to produce in response to a summons.
Important caveats: (1) ZDR requires a separately signed contract — it is not automatic even for Enterprise accounts; (2) even under ZDR, Anthropic retains User Safety classifier results, meaning some data is always kept to enforce its Usage Policy; (3) even with enterprise confidentiality, the absence of an attorney-client relationship means privilege still does not attach to AI-generated content — Heppner's basic holding survives; and (4) a signed contract saying Anthropic won't store your data does not bind the government — it means there is nothing to produce, which is different from privilege protection. If a subpoena arrives and nothing was stored, the subpoena yields nothing. That is useful, but it is not the same legal protection as privilege.
Also notable from Anthropic's own Privacy Center: ZDR applies only to Enterprise API and products using a Commercial organization API key. It does not cover Claude.ai web sessions, Claude Work UI sessions, or beta products unless explicitly added by contract. This creates a trap for firms that think they are Enterprise-protected but whose employees access Claude through the web interface.
Will Congress Pass New Laws to Protect AI Conversations?
The legislative picture is active but not yet resolved — and the direction of federal policy in 2026 is, if anything, away from additional privacy protections, not toward them.
Congress currently has three bipartisan AI bills in circulation: the GUARD Act (S.3062), the CHAT Act (S.2714), and the SAFE Act. All three focus on minors, chatbot disclosure, and companion AI safety. None address the privilege or discovery question raised by Heppner. They are not privacy legislation in the sense that would protect taxpayers from IRS summonses.
At the federal level, President Trump's January 2025 Executive Order on AI policy is oriented toward deregulation and innovation, not data protection. It has been used to push back against state AI regulations — including Utah's disclosure requirements — suggesting the federal posture is not toward new consumer protections that would limit government access to AI records.
State legislation is more active: over 300 AI-related bills were filed in state legislatures in the first month of 2026 alone. California, New York, and Utah have enacted chatbot disclosure laws. But none of these address the specific summons and privilege question the Heppner ruling raises. They require disclosure that AI is being used; they do not create new confidentiality protections for AI conversation records in the face of government subpoena.
The more likely paths to reform are judicial review or a narrowly targeted statute, not broad privacy legislation: the Second Circuit's eventual review of Heppner (or a case with better facts), or a specific amendment to IRC §7602 that carves AI conversation records out of administrative summons authority. Neither is imminent. The American Bar Association and several privacy advocacy organizations have flagged this gap, but flagging a gap and closing it are different things.
My honest assessment: do not plan your risk management strategy around Congress acting. Plan it around the law as it exists today, which is: AI conversations are not privileged, IRC §7602 authority likely reaches AI records, and the government's ToS argument is well-grounded. The legislative environment might change this in three to five years. It might not. You have tax filings and examinations happening now.
Is the Heppner Ruling Settled Law — or Are Critics Right That It Goes Too Far?
This is where most coverage stops. What the news summaries are not telling you is that a serious segment of the legal community thinks Judge Rakoff got parts of this wrong — or at least went further than the facts required. Here is the honest picture of where the debate stands, because I think clients deserve to understand this is not fully resolved:
Several legal commentators argue that the court treated AI-generated documents as categorically equivalent to Google searches, when in practice they function more like a client's handwritten notes in preparation for a meeting with counsel. If Heppner had typed the same 31 documents in Microsoft Word and sent them to Quinn Emanuel, they might well have been protected work product. The medium shouldn't determine the privilege outcome — but under Heppner, it does.
The court's reliance on Anthropic's privacy policy to defeat the confidentiality requirement has drawn pointed criticism. Courts have historically been skeptical of using fine-print clickwrap agreements to override substantive procedural rights. Some scholars argue that attorney-client privilege — which serves systemic interests in the administration of justice — should not be waivable through a terms-of-service clause that 99% of users never read.
Heppner has not been reviewed by the Second Circuit. It is the ruling of a single district court judge, and while Judge Rakoff is one of the most prominent federal trial judges in the country, his decisions are regularly appealed. The waiver analysis in particular is seen by some practitioners as legally overextended. Watch this space — appellate review could significantly narrow or reframe the holding.
Judge Rakoff's opinion is explicitly tied to the Terms of Service of consumer-tier Claude. Claude Enterprise and ChatGPT Enterprise have contractual confidentiality guarantees and zero-data-retention options that directly address the court's primary concern. Critics argue this creates a two-tier system where only clients with enterprise AI access can potentially protect privilege — an outcome that maps badly onto how most individual taxpayers actually use these tools.
In my reading of where this goes: the basic privilege holding — AI is not an attorney, consumer ToS disclaim confidentiality — is well-grounded and likely to survive appellate review. The waiver analysis is more aggressive and may be narrowed. But "more aggressive" does not mean "wrong," and I would not advise any client to rely on the hope of a future Second Circuit reversal as a strategy for their current examination.
What Should a Contractor or Taxpayer Actually Do Right Now?
Most of what I'm reading on this topic stops at "be careful." That's useless. Here is the concrete answer based on my experience in controversy work:
If you are not currently under examination
Stop using consumer AI tools (Claude Free/Pro/Max, ChatGPT Free/Plus/Pro, Gemini, Copilot) for anything related to your tax situation, your business's cash handling, your revenue categorization, or any scenario where you would not want the IRS reading your thoughts. Use AI for general business questions, drafting emails, project planning — not for tax strategy or controversy analysis.
If you are currently under IRS or CDTFA examination
Do not use consumer AI tools for anything related to your case, full stop. Not for research. Not for drafting questions to your tax professional. Not for modeling scenarios. If you have already done this — tell your tax professional now, not later, so exposure can be assessed before discovery becomes an issue.
If you have pasted attorney or tax preparer communications into an AI tool
This is the waiver scenario. Contact your tax professional or attorney immediately. The scope of the damage depends on what was shared, with which platform, and how much of your formal record was shaped by those AI sessions. There may be steps to take now that would not be available later.
What actually provides protection
The safest approach for legal and tax research is to have your attorney or qualified tax professional use enterprise-grade AI tools on your behalf — where the AI is functioning as a tool of counsel, not an independent resource you are consulting directly. In my own practice, I use AI as a drafting and research aid under appropriate professional supervision. The client never interacts with the AI directly for controversy matters. The AI is my tool, not a substitute for the professional relationship.
Frequently Asked Questions About AI Chat Privacy and Tax Disputes
Can the IRS access my ChatGPT or Claude conversations about my taxes?
Yes, under current case law following United States v. Heppner (S.D.N.Y. Feb. 10, 2026), conversations with consumer AI tools like ChatGPT and Claude are not protected by attorney-client privilege and are subject to compelled disclosure. Anthropic's and OpenAI's Terms of Service explicitly state that user inputs may be disclosed to governmental regulatory authorities. If the IRS subpoenas Anthropic or OpenAI in connection with an examination, the platform can — and under their current policies, must — comply. If you have discussed unreported income, cash transactions, offshore accounts, or ongoing audit matters with a consumer AI tool, those conversations are likely obtainable by the government.
Does attorney-client privilege protect my AI conversations about a tax dispute?
No. In United States v. Heppner, Judge Jed Rakoff ruled that communications with an AI tool are not protected by attorney-client privilege for three reasons: (1) an AI is not a licensed attorney, so no attorney-client relationship can exist; (2) Anthropic's Terms of Service disclaim any expectation of confidentiality; and (3) sending AI-generated documents to your attorney after the fact does not retroactively create privilege. Even more critically, if you paste privileged communications from your attorney into a consumer AI tool, you may waive privilege over those original attorney-client communications themselves.
What happens if I described my IRS audit situation to an AI chatbot?
In my experience handling tax controversy cases, this is a serious problem that needs immediate attention before it becomes a bigger one. The conversation is likely not privileged and is potentially discoverable. If you are currently under IRS examination or in a CDTFA dispute, you should contact your tax professional before continuing to use consumer AI tools for anything related to your case. The scope of what was disclosed, when, and to which platform all matter. The correct next step is to get a professional assessment of your exposure — not to continue using AI tools to research the problem.
Is my AI conversation about my medical condition or disability claim discoverable in a lawsuit?
Yes, and this surprises most people. HIPAA protects medical information held by covered entities — healthcare providers and insurers. Consumer AI companies like Anthropic and OpenAI are not HIPAA-covered entities. Medical information you share with a consumer AI tool is governed by that platform's Terms of Service, not HIPAA. In civil litigation — disability discrimination claims, workers' compensation disputes, personal injury cases — opposing counsel can request your AI conversation history in discovery. Courts have signaled they will compel production.
Does using the paid version of ChatGPT or Claude protect my conversations from discovery?
No. The Heppner ruling applies to consumer-tier AI tools including both free and paid individual plans. Claude Pro and ChatGPT Plus both operate under Terms of Service that permit disclosure to governmental authorities. Paying for a subscription does not create a confidentiality agreement. The only tier that potentially changes the legal analysis is enterprise-grade AI with contractual zero-data-retention provisions — and even then, the absence of an attorney-client relationship means privilege still does not attach to AI-generated content.
What AI tools can I safely use for sensitive tax or legal matters?
The safest approach is to have your attorney or qualified tax professional use enterprise-grade AI tools on your behalf — where the AI is functioning as a tool of counsel, not an independent resource you are consulting directly. In my practice, I use AI tools as a drafting and research aid for clients in controversy matters — the client never interacts with the AI directly. If you are using consumer AI tools for business tax planning, stop until you have spoken with your tax professional about what you have already disclosed.
How This Connects to Your Controversy Defense
The Heppner ruling is a procedural development, but its implications run directly into the substance of how we defend clients in IRS and CDTFA matters. Knowing what is and is not discoverable — and managing the record before the government requests it — is a core part of effective tax controversy strategy. For contractors in a CDTFA dispute or IRS collection matter, exposure from prior AI conversations is now a factor in case assessment. Read more on our blog, or if you want to understand how AI tool exposure fits into your current situation, the right conversation starts with a fractional CFO or controversy review.
If You're in an Active Tax Matter, This Ruling Affects You Now.
The time to assess AI-related exposure in an IRS or CDTFA dispute is before the government asks for discovery — not after. In my experience, the conversations that hurt clients are the ones nobody thought to flag until it was too late.
Adam Libman is a California Registered Tax Preparer (CRTP) — not a CPA, not an EA, not a lawyer. Nothing in this post constitutes legal advice. In my reading of the code and based on my experience in controversy matters, the positions above reflect how I am advising clients today.