Effective January 1, 2026, the Texas Responsible Artificial Intelligence Governance Act (TX H.B. 149, 2025) takes a unique approach to AI regulation—pulling threads from the EU AI Act, Colorado's comprehensive AI statute, and national innovation policy, while weaving in Texas-specific priorities.
A Conservative State’s Answer to AI Governance
While many jurisdictions move toward comprehensive, often risk-based, AI regulatory frameworks, Texas has opted for a more targeted and politically aligned approach. The Texas Responsible Artificial Intelligence Governance Act (the “Act”) introduces foundational rules for the development and use of AI technologies—particularly by government actors and in consumer-facing contexts—while leaving significant room for commercial innovation with minimal private-sector burdens.
This law blends select provisions from the EU AI Act (e.g., prohibited uses and a testing sandbox), Colorado's AI Act (e.g., transparency obligations and a dual focus on developers and deployers), and elements of federal policy, such as President Trump's Executive Order 14179 (https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/) encouraging AI innovation.
Key Distinctions from Other AI Laws
- No Risk-Based Framework: The Act does not categorize AI systems by risk level. Unlike the EU or Colorado, Texas does not define or regulate “high-risk” AI, nor does it link regulation to the provision or denial of key opportunities or services such as employment, housing, lending or healthcare.
- Intent-Based Discrimination Standard: The Act prohibits intentional discrimination using AI systems, but disparate impact alone is not sufficient to show unlawful behavior. This marks a divergence from Colorado’s law and aligns with the White House’s disfavor of the disparate impact theory (https://www.whitehouse.gov/presidential-actions/2025/04/restoring-equality-of-opportunity-and-meritocracy/).
- Narrow Transparency Requirements: Disclosure obligations are limited to government agencies and healthcare providers using AI in connection with services or treatment. Unlike broader AI laws, the Act imposes no transparency requirements on most private businesses.
Scope and Applicability
Unlike the EU AI Act and the Colorado AI Act, Texas’ AI Act does not apply to employment or commercial contexts. As a result, businesses do not need to provide notices or meet specific transparency or nondiscrimination requirements with respect to individuals acting in an employment or B2B context.
The Act applies to both “developers” and “deployers”, although both terms are defined self-referentially and, notably, neither “develop” nor “deploy” is defined. This may create interpretive ambiguity for compliance and enforcement. The Act limits most of its restrictive provisions to state government entities and healthcare providers. As a result, most private-sector organizations remain largely unencumbered by the Act’s more prescriptive rules.
Focus on Biometric Identifiers
The Act makes several important amendments to Texas’s existing biometric data laws, a longstanding focus of the state. Section 503.001 of the Texas Business and Commerce Code requires informed consent to capture, store, or use biometric identifiers for commercial purposes. The Act clarifies that the existence of a biometric identifier on the Internet or another publicly available source does not constitute consent to use the information unless it was made publicly available by the individual. In addition, as amended by the Act, the consent requirement does not apply to the use of biometric data (i) to develop or train AI systems that are not used or deployed for the purpose of uniquely identifying individuals, or (ii) to develop or deploy AI systems for preserving the integrity or security of a system or for the purpose of preventing or detecting fraud, harassment, identity theft, or other malicious, deceptive, or illegal activities, or investigating, reporting or prosecuting a person responsible for such activities.
The Act also prohibits government entities from using AI technology to identify individuals based on biometric data, or from gathering media or images from the Internet or another publicly available source, without the individual’s consent if doing so would infringe any right of the individual.
Expressly Prohibited AI Uses
Consistent with the EU AI Act’s prohibited uses framework, the Texas Act bans the use of AI to engage in or aid specific practices, including:
- social scoring by government entities that leads to adverse treatment or rights infringements;
- generation or distribution of Child Sexual Abuse Material (CSAM), including deepfake generation and explicit text-based outputs;
- developing or deploying an AI system with the intent for the system to infringe, restrict or otherwise impair an individual’s rights guaranteed under the US Constitution; and
- manipulation of human behavior to incite or encourage a person to commit physical self-harm, to harm another person, or to engage in criminal activity.
Bias and Discrimination
The Act also prohibits covered persons from developing or deploying AI systems with the intent to unlawfully discriminate against protected classes in violation of state or federal law. Protected classes include race, color, national origin, sex, age, religion or disability. The prohibition is intent-based: demonstrating that an AI system has a disparate impact on a protected class is not, alone, enough to show discrimination. Further, insurance entities are exempt from the discrimination prohibitions if they are already subject to insurance laws and regulations prohibiting unfair discrimination, competition, or acts in the business of providing insurance. Federally insured financial institutions also will be considered in compliance with the Act if they comply with all federal and state banking laws and regulations. This approach narrows the scope of potential liability compared to broader anti-bias rules in other jurisdictions, including the definition of “algorithmic discrimination” in the Colorado AI Act.
Disclosures and Documentation
Notice to consumers is required only for government agencies and healthcare services. Specifically, a governmental agency that makes available an AI system designed to interact with consumers must make a clear and conspicuous disclosure to consumers, before or at the time of the interaction, that the consumer is interacting with an AI system, even if that would be obvious. A licensed, registered or certified provider of healthcare services or treatment for humans must make a clear and conspicuous disclosure to the recipient of the service or treatment (or the recipient’s personal representative) that the recipient is interacting with an AI system no later than the date on which the service or treatment is first provided or, in the case of an emergency, as soon as reasonably possible.
Outside of these requirements and the requirements for participants in the regulatory sandbox (discussed below), obligations for documentation and impact assessments are sparse, especially compared to other AI laws such as the Colorado AI Act, the EU AI Act, state comprehensive privacy laws governing automated decision-making, and the NYC Automated Employment Decision Tools ordinance. Note, however, that the Texas AG can require detailed information from a developer or deployer as part of an AG investigation. Documentation and assessments therefore remain an important part of good governance, even in Texas.
Innovation and Oversight
The Act establishes a regulatory sandbox program to support experimentation and testing of AI systems. This innovation-forward feature echoes regulatory sandboxes in both the EU and U.S., aiming to balance safety with economic development. The Act imposes detailed and substantial requirements for applying for and participating in the sandbox program, including disclosure of the system and its intended use, the potential impacts on consumers, privacy and public safety, plans for mitigating any adverse consequences that may occur during testing, and proof of compliance with applicable federal AI laws and regulations, as well as periodic reporting on metrics, risk mitigation and stakeholder feedback on the AI system. Program participants enjoy a safe harbor from AG charges under the Act during participation, but a participant can be removed from the program on a finding that the AI system poses an undue risk to public safety or welfare, violates federal law or regulation, or violates state law or regulation not waived under the program. Participation is limited to 36 months, subject to extensions for good cause.

The Texas Department of Information Resources is tasked with submitting an annual report that, among other things, discusses the overall performance and impact of the AI systems tested in the sandbox program and makes recommendations on changes to laws or regulations. To support ongoing policymaking and public understanding of AI, the Act also establishes the Texas Artificial Intelligence Council to advise the legislature on AI-related issues and authorizes programs to study the societal impact of AI technologies.
Enforcement and Penalties
The Texas Attorney General and state agencies have exclusive authority to enforce the Act, as it creates no private right of action.
Notable features of the enforcement mechanism include:
- Online complaint mechanism: The Act tasks the AG with creating and maintaining an online complaint mechanism on its website so that consumers can submit complaints.
- Investigative demands: Although, unlike the Colorado AI Act and the EU AI Act, the Act does not impose robust reporting, documentation and impact assessment requirements on developers and deployers outside of the sandbox program, the AG can issue an investigative demand to an organization that is the subject of a complaint. The demand can require a high-level description of the purpose, intended use, deployment context and associated benefits of the AI system; a description of the data used to program or train the AI system; a high-level description of the categories of data processed as inputs for the system; a high-level description of the outputs produced by the system; any metrics used to evaluate the performance of the system; any known limitations of the system; a description of the post-deployment monitoring and user safeguards; and “any other relevant documentation reasonably necessary for the AG to conduct an investigation.”
- Notice-and-cure requirement: Those found to have violated any provision of the Act must receive written notice and a 60-day opportunity to cure violations—apparently even in cases involving serious misconduct like discrimination or CSAM creation.
- Civil penalties only: The Act allows for monetary fines, injunctions, and reimbursement of court and investigation costs, but not criminal penalties.
- Safe harbors: The Act provides that a person will not be held liable if a prohibited use was committed by another party using a compliant system, and it establishes a presumption of “reasonable care” for entities complying with the NIST AI Risk Management Framework Generative AI Profile.
Final Thoughts
The Texas Responsible AI Governance Act signals a distinctively Texan approach to regulating artificial intelligence—limited in scope, innovation-friendly, and aligned with conservative legal trends. While government use of AI will be subject to increased scrutiny under the Act, private-sector actors will find relatively few immediate compliance burdens, particularly outside of healthcare contexts.
Nonetheless, developers and deployers operating in Texas should watch closely as regulatory clarity evolves, especially given the vague definitions and early-stage sandbox framework. For businesses leveraging AI nationwide, this law adds another layer to the fragmented U.S. AI regulatory landscape that will require careful, jurisdiction-specific planning.