Texas has become one of the first U.S. states to enact comprehensive legislation governing artificial intelligence with the passage of the Texas Responsible Artificial Intelligence Governance Act (TRAIGA).
Signed into law by Governor Greg Abbott on June 22, 2025, and effective January 1, 2026, the statute establishes a foundational framework for how AI systems may be developed, deployed and used within the state. While the bill initially mirrored broader regulatory models like the EU AI Act, its final version reflects a more targeted approach, focusing heavily on governmental use of AI and certain high-risk private-sector practices. (H.B. 149 Full Text). Note that this article focuses primarily on the requirements imposed on businesses, rather than those imposed on governmental entities.
Who Is Covered
TRAIGA applies to any person or entity conducting business in Texas or offering AI products or services to Texas residents. This includes developers, deployers and distributors of AI systems. However, small businesses — defined consistent with the U.S. Small Business Administration’s thresholds — are exempt.
Key Prohibitions and Restrictions
The Act sets forth explicit prohibitions on certain uses of AI, particularly those considered to pose unacceptable risks, much like the EU AI Act. These include systems designed to manipulate individuals through subliminal techniques, incite violence or criminal activity, or intentionally discriminate based on race, religion, disability or other protected categories. It also bans the development and use of AI systems to produce or disseminate non-consensual sexually explicit deepfakes.
Unlike the EU AI Act — and unlike earlier drafts of TRAIGA — the law as signed does not regulate "high-risk" AI systems, commonly understood to mean use cases involving confidential data or certain types of consequential decision-making, such as healthcare, employment and financial determinations.
Safe Harbors
TRAIGA does provide certain safe harbors for conduct that would otherwise constitute a violation, including where the violation is discovered through testing or a good-faith audit, or where the company substantially complies with NIST's AI Risk Management Framework or another comparable, recognized standard.
Enforcement and Penalties
TRAIGA vests exclusive enforcement authority in the Texas Attorney General, although state licensing agencies may also impose sanctions on their licensees in certain contexts. Before any penalties can be imposed, alleged violators must be given 60 days' notice and an opportunity to cure the issue. Depending on the type of violation, fines range from $10,000 to $200,000 per violation. Notably, the statute does not provide for a private right of action, meaning that consumers or other third parties cannot sue for violations.
Regulatory Sandbox and Advisory Council
In a nod to innovation, the law establishes a regulatory sandbox managed by the Texas Department of Information Resources. This allows companies or agencies to test new AI technologies under controlled conditions for up to three years, provided they meet eligibility and compliance criteria. TRAIGA also creates an Artificial Intelligence Advisory Council tasked with studying AI-related risks, supporting training and issuing formal opinions to help guide public-sector compliance.
A Calibrated Path Forward
TRAIGA’s legislative path reflects a deliberate scaling back of broad regulatory ambitions in favor of a more pragmatic framework focused on state agencies and the most egregious private-sector risks. Earlier drafts had included expansive obligations for all AI developers and deployers, ranging from consumer notices to liability for disparate impacts. These provisions were ultimately removed or narrowed, likely to ease compliance burdens and avoid overregulation.
In its final form, TRAIGA balances public protection with innovation-friendly policies. It curtails specific high-risk uses, particularly in the public sector, while signaling that broader private-sector regulation may be forthcoming. For now, the law establishes Texas as a leader among U.S. states in AI governance, especially in setting standards for ethical deployment by government actors.
As the January 2026 effective date approaches, legal and compliance teams — especially those supporting state agencies or vendors to the public sector — should begin reviewing AI deployments, updating disclosures and preparing documentation protocols. Meanwhile, all developers and deployers operating in Texas should monitor how TRAIGA is implemented in practice and anticipate additional rulemaking from the Department of Information Resources and advisory input from the newly created Council. Whether this state-level law will preempt or influence future federal legislation remains an open question — but for now, Texas has taken a firm step forward in defining responsible AI governance.