White House Releases America’s AI Action Plan

Wilson Sonsini Goodrich & Rosati

On July 23, 2025, the White House announced its long-awaited comprehensive AI Action Plan titled “Winning the AI Race: America’s AI Action Plan” (the Plan). The Plan is aimed at positioning the U.S. as the global leader in AI and is a follow-up to President Donald Trump’s January 23, 2025, Executive Order on “Removing Barriers to American Leadership in Artificial Intelligence,” which revoked the Biden Administration’s prior AI Executive Order (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence). The AI Action Plan contains more than 90 policy actions related to three key pillars: 1) Accelerating AI Innovation, 2) Building American AI Infrastructure, and 3) Leading in International AI Diplomacy and Security. This alert touches on all three pillars with a focus on the first, which outlines the Trump Administration’s strategic vision and policy recommendations to drive innovation in the American AI sector.

Some Key Takeaways

  • Despite the uncertain fate of a federal moratorium on new state AI regulation,[1] the Administration proposes withholding federal funding to states with burdensome AI regulations, in a bid to discourage broad (and potentially overlapping or inconsistent) AI regulation by states. The Plan also suggests the Federal Communications Commission (FCC) may more directly attempt to challenge some state AI regulations.
  • The Plan calls for the repeal of federal regulations that would stifle AI innovation.
  • The Plan calls on the Federal Trade Commission (FTC) not only to review and reassess certain ongoing AI-related investigations from the Biden Administration, but also to review and reassess final orders, including mutually agreed-upon settlements, that burden AI innovation. It is rare for the FTC to review final orders; companies under FTC order should consider the implications of this White House directive.
  • The Plan directs the expansion of export controls on semiconductor manufacturing components and signals stricter enforcement, extending U.S. oversight on foreign-made items that rely on U.S. technology; it also promotes coordination with allies to align on AI export controls. It also may signal new forthcoming limitations on U.S. companies’ use of certain foreign AI products and models.
  • The CHIPS Program Office and other grant programs can be expected to streamline funding requirements to accelerate semiconductor production and the integration of AI tools into semiconductor manufacturing.
  • Notably, even though the Plan does not directly address copyright, in his remarks unveiling the Plan, President Trump derided stringent copyright enforcement efforts related to the training of AI models using third-party content, suggesting that those efforts would hinder U.S. companies trying to compete against China. He stated that “[y]ou can’t be expected to have a successful AI program when every single article, book or anything else that you’ve read or studied, you’re supposed to pay for. You can’t do it because it’s not doable… China’s not doing it.”

Pillar I:

Removing Burdensome AI Regulations

To ensure that the U.S. maintains its leadership in AI, the Plan emphasizes removing “red tape and onerous regulations” focused on three areas: 1) state regulations on AI, 2) regulations affecting AI innovation, and 3) previous FTC actions.

  1. State Regulations: The Plan specifies that federal funding should not be directed toward states with “burdensome AI regulations” while respecting states’ right to enact “prudent laws that are not unduly restrictive to innovation.” The Plan does not elaborate on what regulations would be considered “burdensome” versus “prudent,” and it does not specify which federal funds could be withheld. A number of states have already enacted AI legislation governing safety and high-risk uses of AI, including the Colorado AI Act, New York’s recently enacted AI companion safeguards law, and the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), to name a few.

    The Plan includes a broad mandate that federal agencies consider state AI regulatory climates while allocating funding, tying federal funding to evaluations of state regulatory environments. For example, the Plan recommends that the Office of Management and Budget (OMB) “work with Federal agencies that have AI-related discretionary funding programs to ensure, consistent with applicable law, that they consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.”

    In addition to using federal funding as a tool to move state policymaking on AI, the Plan suggests more direct challenges may be mounted by the FCC. The Plan calls on the FCC to assess whether state AI regulations interfere with the agency’s authorities under the Communications Act. Implicitly, the Plan suggests that the agency may pursue preemptive regulation or litigation where it identifies an opportunity to challenge state action.

  2. Regulations Affecting AI Innovation: The Plan calls on the Office of Science and Technology Policy (OSTP) to seek public input on current federal regulations that hinder AI innovation. It also directs OMB to work with federal agencies to identify and eliminate unnecessary regulations. The FTC has already begun a broad effort, launching a Request for Information into any anti-competitive regulations. The comment period closed on May 27, 2025.
  3. FTC Investigations: A key policy recommendation to facilitate deregulation includes reviewing all FTC investigations from the previous administration “to ensure that they do not advance theories of liability that unduly burden AI innovation.” Further, the Plan recommends that the FTC “review all final orders, consent decrees, and injunctions, and, where appropriate, seek to modify or set-aside any that unduly burden AI innovation.”

    The effect of such a review could be extensive. The Biden Administration took an expansive approach to addressing the implications of AI, including competition inquiries into the partnerships between major tech companies and AI start-ups. It also announced a number of settlements in the AI space, against companies in the generative AI, retail, security, ecommerce, social media, and facial recognition areas, among others. Some of these cases are likely to be re-examined under the Action Plan directive.

    The broad mandate to review past theories of liability and consent decrees may point to the Trump Administration’s enforcement approach for mergers or partnerships that could fuel AI development. The impact for companies developing or investing in AI technologies remains to be seen.

AI Adoption Through a “Try-First” Culture

The Plan aims to address the slow adoption of AI in critical sectors, such as healthcare, by encouraging a “try-first” culture for AI. Notably, the Plan recommends that agencies like the Securities and Exchange Commission (SEC) and the U.S. Food and Drug Administration (FDA) establish regulatory sandboxes “where researchers, startups, and established enterprises can rapidly deploy and test AI tools while committing to open sharing of data and results.” In light of the increased adoption of AI to boost productivity and streamline operations, this is another significant sign of the Trump Administration’s enthusiasm for a hands-off approach to AI regulation.

To further address the problem of slow adoption of AI within critical sectors, the Plan recommends launching domain-specific initiatives (e.g., in healthcare, energy, and agriculture) to develop national standards for AI and calls for regularly updating assessments of AI adoption within national security contexts and prioritizing intelligence collection on foreign AI projects with potential national security implications.

Prioritizing Open-Source and Open-Weight AI

The Plan emphasizes the promotion of open-source and open-weight[2] AI models to enhance innovation and support AI commercialization, government adoption, and academic research. To support this vision, the Plan recommends several key actions, including ensuring that startups and academics have access to large-scale computing power by improving the financial market for computing resources, partnering with technology companies to enhance access to private sector computing and data, and establishing a robust operational capability for the National AI Research Resource (NAIRR) to connect researchers with AI resources. The Plan also calls for publishing a new National AI Research and Development Strategic Plan to guide federal AI research investments, to be led by OSTP, and convening stakeholders through the National Telecommunications and Information Administration to encourage the adoption of open-source models among small and medium-sized businesses.

“Woke” AI as a Threat to Free Speech

The Plan acknowledges that AI systems will significantly influence future education, work, and media consumption and states that it is therefore crucial that AI systems are “built from the ground up with freedom of speech and expression in mind.” To achieve this, the Plan recommends that the Department of Commerce (DOC), through the National Institute of Standards and Technology (NIST), revise its AI Risk Management Framework to remove references to what the Trump Administration has characterized as “woke” social ideology around diversity, equity, and inclusion, climate change, and misinformation. Further, the Plan states that federal procurement guidelines should require that contracts be awarded only to developers of large language models who “ensure” that their systems are “objective and free from top-down ideological bias.” The Plan also recommends that the DOC evaluate AI models from China to assess their alignment with Chinese Communist Party narratives and censorship practices.

Increased Investment in AI-Enabled Science

The Plan recognizes AI’s potential in promoting scientific development and emphasizes that progress will require scaling of experiments. To support scaling efforts, the Plan recommends developing new infrastructure through public-private collaboration. Further, the Plan recommends supporting the use of AI to make scientific developments through “new kinds of scientific organizations” and contemplates tying federal support of scientific projects with the release of researchers’ nonproprietary, nonsensitive datasets. The Plan highlights high-quality scientific datasets as a national priority and recommends the development of minimum data quality standards.

Elevate AI by Pioneering Interpretability and Control Innovations

The Plan emphasizes the importance of supporting theoretical, computational, and experimental research to foster new breakthroughs that can significantly enhance AI capabilities. Further, the Plan acknowledges the challenges of understanding how AI systems function and predicting their outputs and recommends launching a technology development program led by the Defense Advanced Research Projects Agency to improve AI interpretability and control and prioritizing these advancements in the upcoming National AI Research and Development Strategic Plan.

Build Robust AI Evaluations Ecosystem

The Action Plan calls for the establishment of an AI evaluations ecosystem to assess the performance and reliability of AI systems, including a call for regulators to “[o]ver time… explore the use of evaluations in their application of existing law to AI systems.” The Plan also recommends publishing guidelines and resources through NIST for federal agencies to conduct their own evaluations for their distinct missions and for compliance with existing law. Further, the Plan proposes investing in AI testbeds for piloting AI systems in secure, real-world settings and establishing a new measurement science to promote AI development.

Combating Synthetic Media in the Legal System

The Plan points to the challenges associated with AI-generated media, including the creation of non-consensual intimate imagery and false evidence. The Plan celebrates the enactment of the TAKE IT DOWN Act in May 2025, which, among other things, criminalized the intentional publication of nonconsensual intimate imagery. To address the challenges posed by false evidence for the courts, the Plan recommends that the U.S. Department of Justice issue guidance to agencies regarding a potential “deepfake standard” as well as file formal comments on deepfake-related proposals to the Federal Rules of Evidence.

Pillar II:

Boosting American AI Infrastructure

Pillar II of the Plan seeks to bolster the country’s AI infrastructure and ensure AI dominance, particularly relative to China. The Plan presents eight primary aspects of this strategy. Several initiatives focus on building American AI infrastructure, including creating streamlined permitting for AI infrastructure, developing an electric grid and generation mix to support AI development, reshoring semiconductor manufacturing, and training those in occupations supporting AI infrastructure. The Plan calls out enhanced geothermal, fission, and fusion as important sources of energy generation, and also cautions against prematurely decommissioning older generating plants during the present period of AI-fueled demand for electricity.

The Plan also presents initiatives focused on protecting the security of AI infrastructure. The recommended strategies include a continued emphasis on supply chain security by ensuring that energy and telecommunications infrastructure is free from foreign adversary inputs; the development of AI-related security standards and guidance, including technical standards for high-security data centers as well as guidance for incident response and for addressing AI-specific vulnerabilities and threats; promoting secure-by-design technologies and applications; and promoting information sharing for purposes of identifying and responding to AI-specific risks.

Notably, the Plan recommends that federal agencies share known AI vulnerabilities with the private sector through existing mechanisms and that the U.S. Department of Homeland Security (DHS) share guidance with the private sector on remediating and responding to AI vulnerabilities and threats. The Plan also proposes the creation of an AI Information Sharing and Analysis Center, led by DHS, to encourage sharing AI threat information across critical infrastructure sectors. The Plan also promotes secure-by-design AI technologies and applications to protect against adversarial threats, like data poisoning and privacy attacks, particularly for applications critical to national security.

Finally, the Plan emphasizes the need for a robust federal capacity to respond to AI-related incidents, to ensure that potential failures of AI systems do not disrupt critical services or infrastructure. It recommends integrating AI incident response actions into existing response doctrine and best-practice protocols for both public and private sectors. To that end, the Plan recommends partnering with AI and cybersecurity industries to establish standards and best practices for incident response, updating the Cybersecurity and Infrastructure Security Agency's response playbooks to include AI considerations, and promoting the “responsible sharing of AI vulnerability information” among federal agencies and stakeholders.

Pillar III:

Enhanced Use of National Security and Trade Controls

The Plan outlines a broad strategy by the U.S. government to use various trade control mechanisms to promote the use of domestic solutions within the U.S. and by allies, and to limit the dispersal of solutions built by nations that may be geopolitical adversaries. Most prominently, the Plan tightens export controls on semiconductors and advanced AI compute to prevent their misuse by “countries of concern,” a term often appearing in export control and national security-related regulations and executive orders to refer to China, North Korea, Russia, and Iran, among others.

The DOC will lead efforts to develop new controls on the components for semiconductor manufacturing subsystems, which are not currently covered by the existing rules, and enhance end-use monitoring capabilities globally by leveraging location verification technology on advanced AI chips. This measure aims to close enforcement loopholes and prevent diversion of U.S.-origin AI compute to the “countries of concern” discussed above, to reduce the risk that sensitive technologies land in the hands of adversaries through indirect supply chains or jurisdictions lacking similar controls. This is a further expansion of existing government initiatives along similar lines; as discussed in greater depth in our prior alerts here and here, the DOC has continued to add new export controls for certain advanced computing and semiconductor manufacturing items.

Relatedly, the U.S. Department of State is directed to support the DOC on the strategic front by developing a diplomacy plan to align allied countries with U.S. export controls across the supply chain. Companies operating in multiple jurisdictions should anticipate heightened compliance burdens and exposure for the direct and indirect use of U.S.-origin technology.

In addition, the Plan implicitly suggests that DOC should use other tools, such as the Information and Communications Technology and Services (ICTS) rules, to limit the use of foreign AI by domestic businesses. The ICTS rules permit the U.S. government to review and condition or prohibit the use of certain technologies of concern by U.S. companies. The Plan suggests that in certain industries—such as energy and telecommunications—the Trump Administration may use ICTS orders to ensure limited adoption of foreign AI models and software from countries of concern such as China.

Finally, industry consortia will soon be invited to propose “full-stack AI export packages” under a new DOC program; the packages will likely combine AI-related hardware, software, technology, and services. Selected proposals will receive interagency support to pursue international deals that align with U.S. security standards. The goal is to promote trusted, U.S.-origin AI systems globally as a competitive alternative to adversarial technologies, while ensuring exports are tightly controlled. For industry, this presents a commercial opportunity that will likely be coupled with heightened compliance expectations and require careful coordination across jurisdictions.

Government Support for Growth in Semiconductor Manufacturing and Defense-Oriented AI

The Plan calls for the DOC’s CHIPS Program Office to streamline regulations to accelerate semiconductor production and integration of AI tools. Signed into law in 2022, the CHIPS Act offers, among other things, funding for the development of facilities to research, manufacture, and produce semiconductors and semiconductor-related materials and equipment. Many aspects of the current CHIPS federal financial assistance process are complex. For example, applicants historically were required to meet an extensive set of eligibility criteria, as previously outlined here. The Plan directs the CHIPS Program Office to “remov[e] all extraneous policy requirements for CHIPS-funded semiconductor manufacturing projects,” suggesting that these eligibility criteria may be relaxed to facilitate broader access to this funding. The Plan also calls for the DOC to review its other semiconductor grant and research programs to speed up the adoption of advanced AI tools into the semiconductor manufacturing process.

The U.S. Department of Defense is also directed to continue advancing secure AI adoption internally, including through building AI data centers, and to coordinate with DOC to keep adversaries out of U.S. and allied defense supply chains.

Conclusion

The White House’s AI Action Plan outlines a clear directive for tech policy under the second Trump administration. The Plan makes clear that deregulation, private sector leadership, and rapid development will guide the administration’s policy on AI growth.


[1] On July 1, 2025, the U.S. Senate voted 99-1 to remove a proposed federal moratorium on new state AI regulations from H.R.1 (One Big Beautiful Bill Act), now enacted by Congress and signed by President Trump.

[2] Open weights generally refers to releasing only the pretrained parameters (or “weights”) of the model. Open-weight AI differs from open-source AI in that it does not include the training code, original dataset, model code or architecture details, or training methodology that truly open-source AI might provide.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Wilson Sonsini Goodrich & Rosati
