
[co-author: Stephanie Kozol]*
One of many provisions in the “One Big Beautiful Bill Act,” passed by the U.S. House of Representatives, would place a 10-year “temporary pause” on states’ ability to regulate artificial intelligence (AI). Initially called a moratorium, the prohibition was recharacterized by Senate Republicans as a “temporary pause” to ensure its passage during the reconciliation process. The change was at least partially successful: the provision overcame a procedural hurdle when the Senate parliamentarian concluded that it satisfies the “Byrd Rule” and may remain in the bill. The bill now heads to the Senate floor. If enacted, the temporary pause would mark the most significant federal action (or inaction) on AI to date.
The Proposed Temporary Pause
Contained within the One Big Beautiful Bill Act is a provision that would prevent a state or locality from enforcing any law or regulation that specifically targets AI models, AI systems, or automated decision systems. The prohibition would last for 10 years after the enactment date. If a state enforces AI regulations in violation of the provision, the federal government may withhold broadband grants under the Broadband Equity, Access, and Deployment (BEAD) program. The bill’s language specifies three categories of legislation that would be preempted:
- AI models: “[A] software component of an information system that implements [AI] technology and uses computational, statistical, or machine-learning techniques to produce outputs from a defined set of inputs.”
- AI systems: “[A]ny data system, hardware, tool, or utility that operates, in whole or in part, using [AI].”
- Automated decision systems: “[A]ny computational process derived from machine learning, statistical modeling, data analytics, or [AI] that issues a simplified output, including a score, classification, or recommendation, to materially influence or replace human decision making.”
The breadth of these definitions is purposeful, both because AI is difficult to define and because the bill’s drafters intend to preempt as much state regulation of AI as possible.
The temporary pause is not without exceptions, which are found in Paragraph (2) of the provision. Laws with the primary purpose and effect of “remov[ing] legal impediments to, or facilitat[ing] the deployment or operation of [AI],” or “streamlin[ing] licensing, permitting, routing, zoning, procurement, or reporting procedures in a manner that facilitates the adoption of [AI],” are permitted. Laws regulating AI that impose a fee or bond are also allowed, so long as the fee or bond is reasonable, cost-based, and treats AI and non-AI systems in the same manner.
Further, states are allowed to pass laws that do not impose any substantive design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement on AI, unless such requirement is imposed under federal law or the requirement applies in the same manner as non-AI systems. A significant question going forward is whether the prohibition applies only to laws targeted toward AI regulation or to general laws incidentally affecting AI systems; this final exception seems to imply that states may regulate AI through existing general laws if the regulation applies to non-AI systems in the same way. Indeed, since early 2024, several state attorneys general have signaled that they will enforce consumer protection, privacy, and anti-discrimination laws against companies utilizing AI systems that potentially violate such laws.
Notably, state regulators and federal legislators on both sides of the aisle have objected to the 10-year “temporary pause.” In May, a bipartisan group of 40 state attorneys general sent a letter to Congress voicing their objections to the proposal as violative of state sovereignty and their efforts to protect consumers. And on June 18, a bipartisan group of U.S. senators and state attorneys general held a press conference to express their opposition, noting that the provision would hinder consumer protection efforts. The group included Republican Sen. Marsha Blackburn and Attorney General Jonathan Skrmetti of Tennessee, and Democratic Sen. Maria Cantwell and Attorney General Nick Brown of Washington.
Embracing their role as the laboratories of democracy, states have been at the forefront of developing legislation to combat AI abuses. Thus far, the absence of superseding federal AI governance legislation has allowed states to fill the void. Four states (California, Colorado, Texas, and Utah) have enacted forms of AI governance laws, with dozens of other states considering similar legislation. And in March, Tennessee legislators adopted the ELVIS Act to prohibit the duplication or mimicry of music industry professionals’ voices through AI. Skrmetti, commenting on the states’ role in AI regulation, noted that while “technology moves fast, unfortunately the federal government does not.” Significantly, the proposed federal provision would effectively nullify these state laws, unless a state is willing to forgo the aforementioned broadband funding.
In support of the provision, however, Republican Speaker of the House Mike Johnson of Louisiana has expressed concern over allowing states to trailblaze in the field of AI regulation. While Johnson describes himself as “a fierce guardian of states’ rights and federalism,” he believes this issue should be addressed at the federal level. Although no federal standard is yet in place, he claims “[i]t would be a very dangerous thing, we feel, for all 50 states to have a patchwork of regulations on AI.” Johnson is concerned not only about the volume of AI legislation and the potential conflicts it may create, but also that certain states will set the national standard. He specifically singled out California, claiming that its tendency to “hyperregulate[]” would “stifle innovation.” Other supporters of the provision have generally noted that one law is much easier to comply with than 50 separate laws.
Why It Matters
Given AI’s countless use cases, low operating costs, and ease of use, the technology has unsurprisingly transformed the operations of many businesses. And given this proliferation, federal and state governments will assuredly continue to grapple with the tension between encouraging innovation and enacting an appropriate level of regulation. The adoption of the federal 10-year temporary pause would prevent states from enforcing their AI-specific laws against abuses. While this would hinder states’ ability to enforce laws that address the unique issues AI raises, state regulators will continue to scrutinize AI use under existing consumer protection, privacy, and anti-discrimination laws. Businesses utilizing AI should therefore keep a close eye on the federal provision’s progress, as its fate will fundamentally shape the technology’s future use. In the meantime, companies should ensure they are employing defensible AI policies and practices to mitigate regulatory exposure under existing state laws.
*Senior Government Relations Manager