The Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which was signed into law by Governor Greg Abbott on June 22, 2025, and is effective January 1, 2026, establishes a framework for regulating the responsible use of artificial intelligence (AI) in Texas, focusing primarily on governmental use and specific, high-risk private-sector practices. Against the backdrop of a proposed federal moratorium on state AI legislation and enforcement, true to Texas style, the state forged ahead with putting in place some guardrails on the use of AI technology.
The initial bill, introduced in December 2024, outlined a broad regulatory framework that would have placed substantial requirements on developers and users of “high-risk” AI systems, such as mandatory impact evaluations, extensive recordkeeping requirements, and detailed consumer notifications. After input from stakeholders and legislative debate, the final version shifted to a more targeted approach, focusing on preventing specific harmful outcomes while preserving Texas’s desire to maintain an innovation-friendly climate.
Bill Text: TX HB149 | 2025-2026 | 89th Legislature | Introduced | LegiScan
Scope of Application. TRAIGA applies to any person or entity that “promotes, advertises, or conducts business” in Texas or offers AI products or services to Texas residents, including developers, deployers, and distributors of AI systems.
Prohibitions/Restrictions. TRAIGA enacts a prohibited uses framework that prohibits AI systems designed or deployed to:
- Manipulate Human Behavior: Developers and deployers may not intentionally use AI to encourage a person to incite harm (whether self-harm or harm upon another individual) or criminal activity.
- Calculate a Social Score: Governmental entities may not use or deploy an AI system that evaluates or classifies a natural person or group of natural persons based on social behavior or personal characteristics with the intent to calculate or assign a social score (or similar valuation of the person or group of persons) that could lead to negative treatment or discrimination or that would otherwise violate their US or Texas constitutional rights.
- Violate Constitutional Protections: Developers and deployers may not use an AI system “with the sole intent…to infringe, restrict, or otherwise impair” the US constitutional rights of any individual.
- Capture Biometric Data Without Consent: Governmental entities may not use AI with the goal of “uniquely identifying” an individual using biometric data sourced from publicly available information without the individual’s consent, if such use would infringe on rights protected by the US or Texas Constitution or violate state or federal laws.
- Produce Sexually Explicit Content and Child Pornography: Developers and deployers cannot use AI “with the sole intent of producing, assisting, or aiding in producing, or distributing” child pornography or unlawful deepfake videos and/or images.
- Unlawfully Discriminate: A person may not develop or deploy an AI system with the intent to unlawfully discriminate against a protected class in violation of state or federal law. The section does not apply to an insurance entity if the entity is subject to applicable statutes regarding unfair discrimination, unfair methods of competition, or unfair or deceptive acts or practices related to the business of insurance.
Disparate Impact Is Not Sufficient. TRAIGA emphasizes intent as a critical factor for establishing liability, protecting developers from responsibility for third-party misuse while ensuring accountability for intentional wrongdoing. The bill expressly states that “disparate impact is not sufficient by itself to demonstrate” an intent to discriminate. This requirement is unique to TRAIGA as it relates to other state-specific AI laws.
Consumer Transparency. Governmental agencies are required to provide reasonable notice to consumers when an agency uses an AI system to interact with consumers either before or at the time of interaction. The disclosure must be clear and conspicuous and written in plain language. Use of hyperlinks to effectuate the disclosure is expressly permitted. There are additional and specific requirements if the AI system is used in relation to health care services or treatment.
AI Regulatory Sandbox Program. TRAIGA establishes a regulatory “sandbox” program overseen by the Department of Information Resources (DIR) (in consultation with the newly established Texas Artificial Intelligence Advisory Council, discussed further below), which means that AI developers are provided with a 36-month grace period to test their AI systems with temporary exemptions from specific regulatory obligations. Participants are required to provide quarterly updates on system functionality, risk management/mitigation strategies, and feedback from stakeholders. DIR has the authority to remove a participant from the program if obvious risks arise or in the event of any violation of federal or state laws/regulations. Notably, the Texas Attorney General (AG) cannot “file or pursue charges against a program participant for violation of a law or regulation waived under this chapter that occurs during the testing period.”
Enforcement and Penalties. The Texas AG has exclusive enforcement authority with respect to TRAIGA and is required to “create and maintain an online mechanism” on the AG’s website for consumers to use to submit complaints. Those found to be in violation of TRAIGA are entitled to a 60-day cure period following the AG’s provision of written notice of a violation, after which the AG may bring an action against the individual for the violation. Monetary penalties for violations range from $10,000 to $12,000 for curable violations and from $80,000 to $200,000 for uncurable violations.
Defenses from Liability. TRAIGA also provides certain defenses from liability. For example, a defendant cannot be found liable if the defendant “discovers a violation (of TRAIGA) through testing, including adversarial or red-team testing” or if they are in substantial compliance with the National Institute for Standards and Technology’s AI Risk Management Framework: Generative Artificial Intelligence Profile. For clarity, adversarial and red-team testing are cybersecurity and system evaluation techniques used to identify vulnerabilities, weaknesses, or flaws in systems, networks, applications, and AI models by simulating malicious or hostile actions.
The Texas AI Council. TRAIGA establishes the Texas Artificial Intelligence Advisory Council (Texas AI Council), “composed of individuals from the public who possess expertise directly related to the council’s functions.” The Texas AI Council is comprised of seven (7) members, chosen as follows: (i) three (3) members of the public appointed by the Texas governor; (ii) two (2) members of the public appointed by the Texas lieutenant governor; and (iii) two (2) members of the public appointed by the speaker of the Texas house of representatives. The Texas AI Council may not adopt or promulgate binding rules, impede or overrule the operation of any state agency, or perform any duties not expressly granted to it by TRAIGA; however, it may assist with conducting AI training for governmental entities and issuing advisory reports on AI ethics and compliance. It may also provide recommendations for future legislation and remain involved in oversight of the AI Regulatory Sandbox Program.
Developers and deployers of AI systems have until January 1, 2026, to come into compliance with TRAIGA. Our team at Eversheds Sutherland can assist you with a review of your current business practices, TRAIGA readiness, and regulatory compliance.
[View source.]