
The AI Knows Too Much: When Employees Feed Trade Secrets into Generative AI Tools

March 25, 2026
Estimated Read Time: 5 mins

Every time an employee pastes proprietary source code, a customer list, or a confidential business strategy into ChatGPT, Claude, or Google Gemini, they may be quietly dismantling the legal protections that make those secrets worth protecting. Courts and regulators are only beginning to grapple with this problem, and right now, the burden of preventing it falls squarely on employers.

The Legal Stakes

Under the federal Defend Trade Secrets Act (“DTSA”) and the Uniform Trade Secrets Act (“UTSA”) as adopted across most states, a trade secret plaintiff must show that the information at issue was subject to reasonable measures to maintain its secrecy. Courts have historically credited measures like confidentiality agreements, physical access controls, and employee training—but those safeguards were designed for a world of thumb drives and disgruntled employees. They were not built for a world where a well-meaning engineer can, in seconds, transmit an entire corpus of proprietary data to a third-party AI platform operating under terms of service that may permit the provider to use inputs for model training.

The issue is not hypothetical. Even setting aside the question of whether a vendor actually uses inputs for training, the mere act of entering trade secrets into a public generative AI tool may itself threaten their protected status. In February 2026, the U.S. District Court for the Southern District of New York addressed a closely related question in United States v. Heppner, holding that attorney-client privilege did not extend to documents a party had prepared using Claude (Anthropic’s generative AI platform) and later shared with their attorney. The court observed that Anthropic’s Privacy Policy permits the sharing of users’ personal data with certain third parties, and concluded that users of AI “do not have substantial privacy interests” in their communications with public AI platforms.

The trade secret implications are direct. Just as a privilege holder cannot claim confidentiality over communications routed through a third party with independent access rights, a company that inputs trade secrets into a public AI tool—particularly one that cannot guarantee confidentiality—risks a finding that it voluntarily disclosed that information to an outside party. That finding would be fatal to the reasonable measures element of any subsequent trade secret claim. Heppner arose in the privilege context, but its underlying logic is one that courts and opposing counsel will predictably deploy in trade secret litigation. Beyond litigation risk, however, employers must also contend with labor law constraints when crafting their response.

NLRA/NLRB Risks in AI Acceptable Use Policies

Employers drafting AI acceptable use policies must navigate an additional constraint: the National Labor Relations Act. The NLRB has made clear that overbroad workplace policies that could reasonably chill employees from discussing wages, working conditions, or collective activity are unlawful regardless of employer intent. 

AI policies must be narrowly tailored to protect legitimate business interests—specifically, trade secrets and proprietary information—and employment counsel should review any policy before it goes live. A blanket ban on all AI tool use, or a sweeping confidentiality mandate that captures AI-generated content without limitation, can draw scrutiny if employees or unions argue the policy restricts protected concerted activity. Getting this balance wrong can transform a trade secret protection effort into an unfair labor practice charge.

Building a Defensible Program

While the “reasonable measures” standard does not require perfection, it does require reasonableness in light of the company’s circumstances and the value of the information at stake. Critically, that standard is evaluated as of the time of the alleged misappropriation; a policy adopted after a disclosure event provides no retroactive protection. The following steps go beyond basic agreement updates and are the measures most likely to be credited by courts in the AI context.

  1. Written AI Acceptable Use Policy. Identify categories of information that may not be entered into external AI platforms, such as source code, customer lists, financial projections, and M&A targets, and distinguish between approved enterprise tools and consumer-facing tools. Separately, require written employee acknowledgment at onboarding and annually.
  2. Vendor Audit and Enterprise Agreement Review. Audit the terms of service and data processing agreements for every AI tool in use, focusing on whether the vendor retains training rights over inputs, what security certifications apply, and whether the enterprise product has adequate data isolation from the consumer version.
  3. Technical Controls. Policies alone are insufficient. Data Loss Prevention (DLP) tools configured to block uploads of sensitive data categories to unapproved platforms, network-level restrictions on consumer AI sites from corporate devices, and audit logging of AI tool use are the kinds of technical measures courts are most likely to credit.
  4. Targeted, Documented Training. General confidentiality training that predates the AI era is not adequate. Scenario-based training that concretely illustrates what kinds of prompts create risk, and why, should be delivered and documented.
  5. Updated Employment and IP Agreements. Confidentiality and IP assignment agreements should be reviewed and updated to expressly address generative AI, making clear that trade secret obligations apply equally to disclosure through AI prompts, and that AI-generated outputs incorporating proprietary information remain company IP.
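To make the DLP control in step 3 concrete, the sketch below shows the core idea of a prompt-screening filter: outbound text is checked against blocked categories before it ever reaches an external AI platform. The pattern names and regexes here are hypothetical illustrations of policy categories, not a production ruleset, and a real deployment would rely on an enterprise DLP product rather than a hand-rolled script.

```python
import re

# Hypothetical examples of blocked categories an AI acceptable use policy
# might define; a real DLP ruleset would be far more extensive and tuned.
BLOCKED_PATTERNS = {
    "credential": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidentiality_marker": re.compile(
        r"(?i)\b(?:confidential|trade secret|internal only)\b"
    ),
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked categories detected in the prompt."""
    return [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(prompt)]


def is_allowed(prompt: str) -> bool:
    """A prompt may leave the network only if no blocked category matches."""
    return not screen_prompt(prompt)
```

The design point, for purposes of the “reasonable measures” analysis, is that the block happens automatically and is loggable: each call to `screen_prompt` produces a record of what category was caught, which is exactly the kind of audit trail courts are likely to credit.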

Key Takeaways for Employers

Employee intent is largely irrelevant; the well-meaning engineer who debugged proprietary code using an unapproved AI tool has created the same legal problem as a bad actor who deliberately exfiltrated data. The message from United States v. Heppner—that users of public AI platforms do not have substantial privacy interests in what they share with those platforms—is one that courts are likely to find persuasive well beyond the privilege context. Companies that treat AI governance as a trade secret protection issue, not merely a technology policy, and build the vendor, technical, and training infrastructure to match will be better positioned both to protect their most valuable assets and to pursue DTSA claims if protection fails.

Tags: Artificial Intelligence, Trade Secrets

Disclaimer: This alert is provided for informational purposes only; it does not constitute legal advice and is not intended to form an attorney-client relationship. Please contact your Sheppard attorney contact for additional information.
