
Texas Joins the AI Regulation Wave: Key Employer Takeaways From the Texas Responsible Artificial Intelligence Governance Act

March 6, 2026
Estimated Read Time: 7 mins

Artificial intelligence (“AI”) technologies are rapidly transforming workplace practices—from recruitment and candidate screening to performance evaluations and operational decision-making. New technology breeds new regulation, and Texas is no exception.

On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible Artificial Intelligence Governance Act (“TRAIGA”) into law. TRAIGA took effect on January 1, 2026, and although its primary focus is broad AI governance, the statute contains several provisions with significant implications for Texas employers who develop, deploy, or use AI systems.

A. Overview of TRAIGA: Applicability and Key Obligations

TRAIGA’s applicability is expansive and extends beyond entities physically headquartered in Texas. The law applies to any “person or entity” that: (1) conducts business in Texas; (2) produces a product or service used by Texas residents; or (3) develops or deploys an AI system in Texas. Tex. Bus. & Comm. Code § 551.002(1)-(3). TRAIGA’s expansive scope means that even employers headquartered outside Texas should assess whether the law applies to them.

1. Definition of “Artificial Intelligence System”

TRAIGA defines an “artificial intelligence system” as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.” Tex. Bus. & Comm. Code § 551.001(1). In short, an “AI system” is any apparatus that produces content or makes recommendations that might impact the virtual or physical world.

2. Prohibited Uses of AI

TRAIGA prohibits the development or deployment of an AI system for improper purposes. Such “improper purposes” include:

  1. Intentional discrimination on the basis of a protected class under state or federal law; 
  2. Manipulating human behavior or encouraging self-harm, violence, or criminal activity; 
  3. Infringement on federal constitutional rights; or
  4. Producing or distributing explicit content or child sexual abuse material.

Tex. Bus. & Comm. Code §§ 552.052, 552.055, 552.056, 552.057.

Notably for employers, TRAIGA clarifies that disparate impact alone is not sufficient to demonstrate prohibited discrimination, signaling an intent-focused enforcement theory. See Tex. Bus. & Comm. Code § 552.056(c). This statutory language ensures that the compliance question is not purely outcome based. Rather, the inquiry is twofold: (1) does the AI system produce adverse outcomes, and (2) was the AI system adopted, designed, or configured with an unlawful discriminatory purpose?

Although TRAIGA’s plain language makes intent a central inquiry, this can still be problematic for businesses seeking certainty in compliance. Practically speaking, “intent” is frequently inferred from circumstantial evidence. So, an intent-based standard does not eliminate the need for rigorous measurement and monitoring of how the AI tools are functioning. Employers should assume that documentation gaps, failure to test known risk areas, or continued use after credible bias indicators emerge may be used to infer a statutory violation.

For example, if an employer receives repeated internal audit findings showing that an AI-driven performance scoring tool systematically downgrades employees within a protected class, and the employer does not implement corrective measures or a review, regulators may argue that the employer’s continued use of the AI tool evidences a knowing facilitation of discriminatory effects—even if the original program was neutral.

B. Enforcement Authority and Penalties

1. Enforcement Authority

TRAIGA vests enforcement authority exclusively in the Texas Attorney General, thereby eliminating private enforcement. Tex. Bus. & Comm. Code § 552.101. This is meaningful for employers because it concentrates risk in regulator-driven investigations rather than employee-initiated litigation. But the lack of a private right of action does not necessarily reduce overall exposure, because federal and state anti-discrimination claims remain available through other legal channels. Against that backdrop, TRAIGA investigations may generate documentation that is discoverable or relevant in parallel litigation. Tex. Bus. & Comm. Code § 552.103.

2. Penalty Structure and Cure Period

Civil penalties under TRAIGA range from $10,000 to $200,000 per violation, and the statute provides a 60-day cure period. Tex. Bus. & Comm. Code § 552.105. The cure structure introduces a quasi-supervisory compliance dynamic. Specifically, employers that can quickly isolate a problematic deployment, suspend or adjust it, and document remediation steps will be better positioned to avoid or reduce penalty exposure. The Texas Attorney General may also seek injunctive relief against further violations and may recover attorney’s fees and reasonable court costs or other investigative expenses. Tex. Bus. & Comm. Code § 552.105.

TRAIGA also encourages proactive compliance. Affirmative defenses are available for companies that self-detect and remedy issues through internal audits, employ third-party testing, or adhere to recognized standards such as the NIST AI Risk Management Framework.

C. Practical Guidance for Employers

Employers seeking to lessen risk under TRAIGA should consider the following “do’s and don’ts”:

  1. Do Inventory and Classify All AI Systems in Use. Create a comprehensive list of AI tools used across human resources and business operations—including hiring algorithms, resume screeners, chatbots, scheduling tools, and performance metrics engines.
  2. Do Conduct Internal Risk Assessments. Perform risk assessments that evaluate how each AI tool affects protected classes, identify potential bias, and document the intended purpose and limitations of the system.
  3. Do Document Intent and Oversight. Maintain clear documentation showing the non-discriminatory intent behind AI deployments, including selection criteria for vendors, testing protocols, and review by legal and compliance teams.
  4. Do Implement NIST-Aligned Compliance Frameworks. Adopt risk management frameworks aligned with nationally recognized standards (e.g., NIST AI Risk Management Framework) to help establish affirmative defenses under TRAIGA.
  5. Do Train HR and Tech Teams. Ensure cross-functional training for HR, IT, and legal professionals on both the legal and ethical implications of AI deployment in employment contexts.
  6. Don’t Rely Solely on Vendor Representations. Third-party vendors may claim compliance with various federal or international standards, but employers should verify that those tools also comport with TRAIGA’s requirements and federal anti-discrimination law.
  7. Don’t Neglect Ongoing Monitoring. AI systems are dynamic. Avoid the trap of “set and forget.” Periodically reassess tools for bias, outdated data, or unintended effects that could trigger enforcement risk.
  8. Don’t Use AI Without Human Oversight. Automated decision-making without human review increases the risk of discriminatory outcomes. Employers should retain meaningful human oversight over AI outputs that influence employment decisions.
  9. Don’t Assume Intent Is Irrelevant. While TRAIGA emphasizes intent, courts and federal agencies often consider both intent and impact. Employers should still account for both when adopting and deploying AI.
  10. Don’t Ignore Longstanding Discrimination Laws. State and federal anti-discrimination statutes such as Title VII, the ADA, and the ADEA still govern employer conduct. An employer could be compliant with TRAIGA’s intent standard yet still face federal liability for a discriminatory AI system that produces disproportionate adverse impacts. Accordingly, it is prudent for employers to navigate two layers of AI compliance risk: state-level enforcement under TRAIGA and continued enforcement under federal employment laws.
  11. Do Engage Experienced Counsel To Navigate Risk. Given the fast-evolving legal landscape, employers should consult experienced counsel to design compliance strategies that fit their business and workforce needs.

TRAIGA represents a pivotal development in how AI systems will be regulated at the state level. Employers with Texas operations (or those that serve Texas workers or residents) must proactively evaluate and document their AI usage, align with risk-management best practices, and ensure ongoing oversight to mitigate both state and federal legal risks.

Tags: Artificial Intelligence, Texas Legislative Update, Labor and Employment

Disclaimer: This alert is provided for information purposes only and does not constitute legal advice and is not intended to form an attorney client relationship. Please contact your Sheppard attorney contact for additional information.
