Texas Responsible AI Governance Act Enacted

Wiley Rein LLP

On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA or the Texas AI Act) into law. The new law goes into effect January 1, 2026. The law places obligations and restrictions on government use of AI and prohibits a person from developing or deploying AI systems for certain illegal purposes. The new law also amends existing privacy laws to address AI-specific issues. Finally, the law establishes the Texas Artificial Intelligence Council and creates a regulatory sandbox program for artificial intelligence systems.

Below we provide a high-level breakdown of the Texas AI Act and identify enforcement procedures that may be relevant to developers and deployers of AI systems.

Obligations and Restrictions on Government Agencies

Notice Requirement. TRAIGA requires “governmental agenc[ies]” that use an “artificial intelligence system” to interact with consumers to disclose that fact before or during the interaction, “regardless of whether it would be obvious to a reasonable person that the person is interacting with an [AI] system.” Such notice must be clear and conspicuous, written in plain language, and “may not use a dark pattern.” The law defines “artificial intelligence system” as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”

Prohibited Uses of AI Systems. Government entities are prohibited from using AI systems for social scoring and biometric identification of specific individuals without consent.

First, governmental entities are prohibited from using an AI system that evaluates or classifies a person based on social behavior or personal characteristics “with the intent to calculate or assign a social score or similar categorical estimation or valuation of the person,” if such classification results in (1) detrimental or unfavorable treatment of the person in a social context unrelated to the context in which the behavior or characteristics were observed or noted; (2) detrimental or unfavorable treatment of the person that is unjustified or disproportionate to the nature or gravity of the observed or noted behavior or characteristics; or (3) an infringement of the person’s constitutional rights.

Second, government entities must not develop or deploy AI systems for the purpose of uniquely identifying a specific individual “using biometric data or the targeted or untargeted gathering of images or other media from the Internet or any other publicly available source” without the individual’s consent if the gathering would infringe on any right of the individual under the United States Constitution, the Texas Constitution, or state or federal law. Biometric data includes a “fingerprint, voiceprint, eye retina or iris, or other unique biological pattern or characteristic that is used to identify a specific individual,” but excludes physical or digital photographs, video or audio recording, or information collected, used, or stored for health care treatment, payment, or operations.

Restrictions on Individual Development and Deployment of AI Systems

The Texas AI Act also more broadly prohibits the development and deployment of AI systems for certain criminal and harmful activities. Specifically, TRAIGA prohibits the development and deployment of AI systems that are intentionally designed to (1) harm another person; (2) engage in criminal activity; (3) infringe, restrict, or otherwise impair a person’s constitutional rights; (4) unlawfully discriminate against a protected class (with certain exclusions for separately regulated insurance entities and financial institutions); or (5) produce and distribute certain sexually explicit content and child pornography.

Amendments to Existing Texas Privacy Laws

TRAIGA also amends the Texas biometric privacy law to clarify the application of that law’s notice and consent requirements. Additionally, TRAIGA expands the biometric privacy law’s exemptions, including nuanced exemptions for (1) the training and development of AI systems (unless those systems are being created for the sole purpose of uniquely identifying a specific person) and (2) the development or deployment of an AI system that is designed to prevent or protect against security incidents, identity theft, fraud, or other illegal activity; to investigate, report, or prosecute the person responsible for such activity; or to preserve the security of a system.

In addition to the amendments to the biometric privacy law, TRAIGA also makes a targeted amendment to Texas’ comprehensive privacy law to clarify that processors are obligated to assist controllers to meet applicable requirements for personal data collected, stored, and processed by an AI system.

Enforcement of TRAIGA

TRAIGA provides the Texas Attorney General with exclusive enforcement authority. The law explicitly states that there is no private right of action, though the Attorney General is required to create and maintain an online mechanism for consumers to submit complaints about alleged violations of the law. The Attorney General may issue civil investigative demands in connection with those complaints to determine whether a person has violated the law. Persons accused of violating the law may rely on a rebuttable presumption of reasonable care.

The Texas AI Act provides some limitations on the Attorney General’s enforcement actions. First, if the Attorney General determines that a person is in violation of the statute, the Attorney General must provide notice to that person, who then has 60 days to cure the violation before any enforcement action can be brought. Second, an action to collect a civil penalty may not be brought against a person for an AI system that has not been deployed. Third, a person may not be found liable if another person used the AI system in a prohibited manner or if the person discovers a violation through (1) feedback from a developer, deployer, or other person who believes a violation has occurred; (2) testing, including adversarial or red-team testing; (3) guidelines set by applicable state agencies; or (4) compliance with the most recent version of the NIST AI Risk Management Framework: Generative AI Profile or another nationally or internationally recognized framework.

Regulatory Sandbox Program and Texas Artificial Intelligence Council

The Texas AI Act establishes the Texas Artificial Intelligence Council, a seven-member, appointed expert body under the Texas Department of Information Resources. The Council’s aim is to ensure that AI systems are ethical and developed in the public’s best interest, and that they do not harm public safety or undermine individual freedoms. The Texas AI Act also directs the Council to identify existing laws and regulations that impede innovation in the development of AI systems and recommend appropriate reforms, as well as to analyze opportunities for the State to improve government operations through AI systems. The Council may issue reports to the legislature regarding the use of AI systems and is authorized to conduct training programs and educational outreach for state agencies and local governments on the use of AI systems. However, the Council may not adopt rules or promulgate binding guidance, interfere with or override the operation of a state agency, or perform a duty or exercise a power not granted by TRAIGA.

The new law directs the Texas Department of Information Resources, in consultation with the Texas Artificial Intelligence Council, to create a regulatory sandbox program that provides participants with legal protection and limited access to the market to test innovative AI systems without obtaining a license, registration, or other regulatory authorization. The sandbox program is designed to (1) promote the safe and innovative use of AI systems across various sectors; (2) encourage responsible deployment of AI systems while balancing the need for consumer protection, privacy, and public safety; (3) allow an AI system to be tested with certain parameters while certain laws and regulations related to testing are waived or suspended; and (4) support engagement in research, training, testing, or other pre-deployment activities to develop an AI system.

***

The Texas AI Act highlights state scrutiny of AI development and deployment, and could signal a ramp-up of AI-related enforcement activity in Texas, in parallel with ongoing enforcement of its privacy laws. The Texas law adds to the growing patchwork of AI laws passed over the last year, including in Colorado. As more state laws emerge, companies will need to keep an eye out for new obligations that might impact their development and use of AI.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Wiley Rein LLP
