States are playing a critical role in the development of AI policy in the United States, as serious legislative efforts at the federal level remain stalled. The Trump Administration and Republican Congress have signaled they plan to take a business-friendly, light-touch approach to AI regulation that prioritizes American innovation and emphasizes competition with China, and there is little prospect of Congress moving a substantive AI bill in the current political climate. Instead, states are moving to fill the federal void by taking up a flurry of AI bills to ensure that—amid AI innovation—regulators also create “guardrails” around uses of AI and that consumers can identify AI-generated content. Much like state legislatures, state attorneys general will likely focus on AI issues as they intersect with consumer protection concerns, including privacy.
This alert focuses on the major actions taken thus far in 2025 and how we expect states to serve as the laboratories shaping the rules around AI innovation and safety in the year ahead.
I. Federal Regulation
Federal legislative action on AI during the Trump Administration has been limited and largely focused on deregulating and streamlining AI development, including by counteracting state efforts at regulation. After bipartisan opposition to a provision in the House’s One Big Beautiful Bill Act that would have prevented states from enforcing or enacting laws regulating AI models, the Senate Commerce Committee took a more subtle but similarly intentioned route, proposing to condition certain federal funds on states’ agreement not to enact AI regulations. This so-called “AI moratorium” proved controversial and made for some strange political bedfellows: Colorado Governor Jared Polis (D), Senator Ted Cruz (R), and the US Chamber of Commerce all supported the moratorium, while Senator Ed Markey (D), Rep. Marjorie Taylor Greene (R), Florida Governor Ron DeSantis (R), and Arkansas Governor Sarah Huckabee Sanders (R) all opposed it. After a climactic series of events, the Senate voted 99-0 in early July to remove the AI moratorium from its version of the Act, which President Trump ultimately signed into law without the moratorium. Although some Senators have discussed moving a standalone “AI moratorium” bill, the earlier bipartisan opposition suggests such an effort is unlikely to succeed.
A more likely source of conflict between the federal government’s deregulatory push and states’ interest in creating safeguards on AI use and development comes from the recent Trump Administration executive actions on AI. On July 10, the Trump Administration issued its AI Action Plan, which calls for the removal of “red tape and onerous regulation” and, in doing so, revives aspects of the AI moratorium to curtail state regulation. In full, the Action Plan notes that “[t]he Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.” The Plan does not specify the criteria for such laws, but it does direct the United States Office of Management and Budget (“OMB”) to ensure that federal agencies with discretionary funding take into consideration “a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.” This provision could have a chilling effect on states—like California and New York—that are considering legislation that would impose regulations on developers and deployers of AI.
The AI Action Plan also calls for the FCC to evaluate whether state AI regulations “interfere with the agency’s ability to carry out its obligations and authorities” under the Communications Act of 1934, 47 U.S.C. § 151 et seq. This suggests that the Administration may argue that state AI regulations are preempted by the FCC’s authority under the Communications Act. However, the boundary between the FCC’s regulatory authority and that of the states has been a source of ongoing dispute: in 2016, for example, an appeals court overturned the Obama-era FCC’s efforts to preempt laws in North Carolina and Tennessee restricting municipal broadband networks. These overlapping sources of authority could give rise to complex litigation over the FCC’s power to preempt state AI legislation.
Finally, the AI Action Plan does call for one proactive role that states can play in the AI landscape: rapidly retraining workers who are displaced or impacted by AI. The Action Plan calls on the US Department of Labor and US Department of Commerce to leverage discretionary funds for this purpose and to work with states to pilot effective workplace training programs.
Along with the AI Action Plan, the Administration issued three executive orders concerning AI development. As discussed in Jenner & Block’s recent client alert, the executive orders seek to shift federal policy towards a more streamlined and permissive regulatory environment for AI development and infrastructure. The Plan identifies three key initiatives to (1) accelerate AI innovation, (2) build American AI infrastructure, and (3) advance US leadership in international AI diplomacy and security. The overarching theme is that the federal government wants to remove regulatory barriers to ensure American dominance in AI—including regulatory barriers established through state regulation.
II. State Regulation
Given the Trump Administration’s deregulatory push, and limitations on the executive branch’s ability to act alone in restricting state regulation, states are likely to remain the primary source of new AI regulation in the near future. In the past few years, AI legislation has increasingly become the “it” topic in statehouses around the country. In 2024, 45 states introduced AI bills touching on various regulatory issues, including deepfakes, election security, watermarking, and child pornography. Colorado enacted a landmark bill requiring that developers and deployers exercise reasonable care to protect consumers from risks of algorithmic discrimination in certain situations. And last fall, California enacted a significant package of AI legislation consisting of 17 bills covering the deployment and regulation of generative AI. In the absence of comprehensive federal action, states are enacting a patchwork of AI regulation that will create uneven compliance burdens for organizations developing or deploying AI; those organizations will need to closely track the changing regulatory landscape to ensure compliance with new state laws.
At this point in the 2025 legislative cycle, more than a thousand AI bills have been filed across all fifty states, many with bipartisan co-sponsors. The majority of states have wrapped up their legislative sessions, although a few heavy-hitting states like California still have significant AI legislation in the works.
The key bills from 2024 and 2025 reflect state legislators’ growing concern with AI safety, AI deployment, and AI transparency. Specifically, lawmakers around the country have introduced important bills addressing AI models, AI uses, watermarking or labeling of AI-created content, AI transparency, deepfakes, copyright and model training data, and state investment in AI. Several key categories include:
- Bills regulating AI models create safety and risk management rules for the development, training, and operation of frontier AI models. For example, New York’s Responsible AI Safety and Education (RAISE) Act, which requires large AI developers to have safety plans to protect against widespread harm, passed in June and is currently awaiting Governor Hochul’s signature, while Senator Wiener’s new safety bill (the Transparency in Frontier Artificial Intelligence Act, S.B. 53) unanimously passed the California Senate and is currently pending in the Assembly.
- Bills focused on “high-risk” AI deployment regulate the ways AI can be used by businesses and consumers, imposing disclosure and other requirements when AI is used in “high-risk” contexts. This subject in particular has drawn the interest of state legislatures, especially regarding the ways AI can be used to make consumer-facing decisions relating to healthcare, employment, insurance, and housing. The Colorado AI Act, passed in 2024, has been a model for other states seeking to enact comprehensive AI regulation, although to date none of the follow-on bills has been enacted into law and, in fact, the Colorado Legislature is rumored to be considering extending the compliance date of that law. Virginia’s H.B. 2094 came close but was vetoed by Governor Youngkin in late March. Texas’s sprawling safety bill (H.B. 1709) also garnered much attention but ultimately did not pass this session. Colorado’s law goes into full effect in February of 2026, so companies should prepare for compliance. States will likely face pressure to hew closely to Colorado’s model so that companies do not confront different compliance obligations in all fifty states; any fixes or improvements to that model will likely be made first in Colorado and then flow out to other states.
- Watermarking bills require the inclusion of digital watermarks or content provenance information in specific types of content, to identify the origin of the material and help distinguish it from human-created content. Washington and other states have introduced bills modeled on California’s AI Transparency Act, which was enacted last year. So far, California is the only state to enact this type of legislation; its AI Transparency Act comes into full effect on January 1, 2026.
- AI transparency bills require that consumers be notified when they are interacting with AI, such as a chatbot. For example, Utah S.B. 226, which was enacted in March, requires that companies using AI to interact with consumers disclose the use of AI in certain circumstances. Multiple states have enacted this type of law this session: Maine, Montana, New Jersey, North Dakota, and Utah.
- Deepfake bills create rules and penalties for creating AI-generated content that has been convincingly altered or manipulated to misrepresent someone as doing or saying something they did not actually do or say. While some examples of this type of legislation are broader, states like Florida and Texas have enacted bills modeled on the federal TAKE IT DOWN Act, which President Trump signed earlier this year. Numerous states enacted this type of legislation this session, including Arizona, Arkansas, Colorado, Connecticut, Florida, Illinois, Kansas, Kentucky, Maine, Maryland, Montana, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Dakota, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Texas, Utah, Virginia, Washington, and West Virginia.
- Digital replica bills, like Maryland’s H.B. 1407, protect rights holders from having their voice or likeness replicated using AI in an unauthorized manner. So far, Arkansas, Montana, Pennsylvania, Tennessee, and Washington have each passed this type of law this session.
- Model training bills, such as California’s A.B. 2013, create rules or disclosure requirements about the data used to train an AI model. To date, Arkansas and Texas are the only states to have passed this type of law this session.
III. Future Predictions
Despite federal efforts to slow down state policymaking, we expect states to continue actively legislating on AI, particularly majority-Democratic states like California and New York that have been churning out (and in some cases passing) AI bills and where the Trump Administration’s deregulatory efforts may only galvanize additional action. Proposed legislation moving forward will likely mirror the topics of the legislation passed and proposed throughout 2024 and 2025, including regulating deepfake content, disclosures and transparency, and watermarking and content provenance requirements. As AI regulations passed in other jurisdictions—like the E.U.’s AI Act and Colorado AI Act—begin to take effect, legislators in other states may use those existing regulations on high-risk deployment, transparency, and more as models upon which to build similar legislation.
With state legislation potentially under fire from the Administration, we also expect state attorneys general and state regulatory agencies to play an increasingly prominent role in AI-related enforcement and investigations. For example, California Attorney General Rob Bonta has advocated for increased scrutiny of AI commercial activity, particularly with respect to consumer protection, data privacy, and healthcare. In January, he issued two legal advisories reminding consumers of their rights and advising businesses and healthcare entities about their obligations under California law. Nor are blue-state attorneys general the only ones interested in alleged misuses of AI. In September, for example, Texas Attorney General Ken Paxton announced that his office had settled an investigation into an AI healthcare company related to deceptive claims about its AI healthcare products.
We expect these investigations by state attorneys general to grow in scope and frequency as consumers interact more regularly with AI products and services. As AI continues to advance, we also expect state regulators and state legislators to remain active in adopting new rules, especially if federal inaction persists. And in the event the federal government seeks to restrict state AI regulation, states would likely fight those efforts in court.