The Trump administration unveiled its long-awaited AI Action Plan (the plan) on July 23, directing the federal government to accelerate the development of artificial intelligence (AI) in the United States and “remove red tape and onerous regulation” while ensuring that AI is free of “ideological bias.” The plan’s epigraph, signed by President Trump, states that “it is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance.” Doubling down on the national security, economic, and trade competition framing, the introduction of the plan, which is titled “Winning the Race,” states that “the United States is in a race to achieve global dominance in AI” and calls for the US to win this race “just like we won the space race.”
The plan presents AI as a technological breakthrough that will lead to “[a]n industrial revolution, an information revolution, and a renaissance—all at once.” It does not linger on documented AI risks, such as trust and safety, accuracy, intellectual property, privacy, cybersecurity, or bias and discrimination. Indeed, the word “safety” appears in the document just once. The plan is signed by Michael Kratsios, assistant to the president for science and technology; David Sacks, special advisor for AI and crypto; and Marco Rubio, in his capacity as assistant to the president for national security affairs.
The plan comprises three pillars: (i) Accelerate AI Innovation; (ii) Build American AI Infrastructure; and (iii) Lead in International AI Diplomacy and Security.
Pillar I: Accelerate AI Innovation
The first pillar, Accelerate AI Innovation, calls for extraordinary deregulatory measures, some of them without precedent. It calls on the “Federal government to create the conditions where private-sector-led innovation can flourish.” Under the subheading “Remove Red Tape and Onerous Regulation,” the plan directs the Federal Trade Commission (FTC), an independent administrative agency created by Congress in 1914 under the FTC Act, to review “investigations commenced under the previous administration to ensure that they do not advance theories of liability that unduly burden AI innovation.” In a remarkable departure from decades of legal tradition, it calls for the FTC to reopen, modify, or set aside existing orders, consent decrees, and injunctions “that unduly burden AI innovation.”
Echoing the recently defeated proposal for a federal moratorium on state AI regulation, the plan states, “The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds,” though it adds that “[it] should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation.” It directs the Office of Management and Budget to work with federal agencies that have AI-related discretionary funding programs “to ensure, consistent with applicable law, that they consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.” While this language leaves much room for interpretation, it portends a policy of channeling federal funding toward states whose AI policies do not conflict with the Trump administration’s agenda.
Under the subheading “Ensure that Frontier AI Protects Free Speech and American Values,” the plan directs the Department of Commerce, acting through the National Institute of Standards and Technology (NIST), to revise the NIST AI Risk Management Framework, a foundational policy and governance document in the AI space, to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.” It further requires an update to federal procurement guidelines to “ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias.”
The plan encourages an environment supportive of open-source and open-weight AI models, weighing in on an issue that has been fiercely contested within the industry, with leaders such as OpenAI and Meta advocating opposing views. It pushes for more rapid adoption of AI in sectors ranging from healthcare to energy and agriculture, advocating for what it calls “a dynamic, ‘try-first’ culture for AI across American industry” and lamenting the current “distrust or lack of understanding of the technology, a complex regulatory landscape, and a lack of clear governance and risk mitigation standards.”
Recognizing the profound, and potentially ominous, implications of AI for the labor force, the plan calls for measures to “empower American workers in the age of AI.” It directs federal agencies to enhance AI literacy, skill development, and training; conduct research to assess AI’s impact on the labor market; and leverage available discretionary funding to support rapid retraining for individuals affected by AI-related job displacement.
Other initiatives under the first pillar include prioritizing investment in a wide range of new products powered by AI, including “autonomous drones, self-driving cars, robotics, and other inventions for which terminology does not yet exist”; increasing investment in AI research; building “world-class scientific datasets” (noting that “other countries, including our adversaries, have raced ahead of us in amassing vast troves of scientific data”); and combating deepfakes, particularly insofar as they can be used as evidence in legal proceedings.
Pillar II: Build American AI Infrastructure
The second pillar, Build American AI Infrastructure, calls for a set of federal investments, deregulatory measures, and policy initiatives to boost the development of AI infrastructure, including “data centers, semiconductor manufacturing facilities, and energy infrastructure,” while safeguarding security. The plan states that “America’s environmental permitting system and other regulations make it almost impossible to build this infrastructure in the United States with the speed that is required. Additionally, this infrastructure must also not be built with any adversarial technology that could undermine U.S. AI dominance.”
The plan emphasizes the importance of cybersecurity as AI is integrated into critical infrastructure. It notes, “the use of AI in cyber and critical infrastructure exposes those AI systems to adversarial threats. All use of AI in safety-critical or homeland security applications should entail the use of secure-by-design, robust, and resilient AI systems that are instrumented to detect performance shifts, and alert to potential malicious activities like data poisoning or adversarial example attacks.” It tasks the Department of Homeland Security with issuing and maintaining “guidance to private sector entities on remediating and responding to AI-specific [cyber] vulnerabilities and threats.” And it orders the government to “promote the development and incorporation of AI Incident Response actions into existing incident response doctrine and best-practices for both the public and private sectors.”
Specific initiatives under this pillar include massive enhancements to the electric grid; repatriating key parts of the semiconductor industry; and training a skilled workforce for AI infrastructure building, “including roles such as electricians, advanced [heating, ventilation, and air conditioning] technicians, and a host of other high-paying occupations.”
Pillar III: Lead in International AI Diplomacy and Security
The third pillar, Lead in International AI Diplomacy and Security, calls on the US government to “drive adoption of American AI systems, computing hardware, and standards throughout the world [...] while preventing our adversaries from free-riding on our innovation and investment.” It calls on US industry to “meet global demand for AI by exporting its full AI technology stack—hardware, models, software, applications, and standards—to all countries willing to join America’s AI alliance.”
This part of the plan is openly focused on countering the global influence of China’s AI industry. It rails against existing international AI governance efforts by bodies such as the Organisation for Economic Co-operation and Development, the Group of Seven, the Group of 20, the International Telecommunication Union, and the Internet Corporation for Assigned Names and Numbers, stating that those initiatives “have advocated for burdensome regulations, vague ‘codes of conduct’ that promote cultural agendas that do not align with American values, or have been influenced by Chinese companies.”
Initiatives under this pillar include strengthening “AI compute export control enforcement”; plugging “loopholes in existing semiconductor manufacturing export controls”; boosting research into novel AI-driven national security risks in areas such as “cyberattacks and the development of chemical, biological, radiological, nuclear, or explosives (CBRNE) weapons, as well as novel security vulnerabilities”; and investing in biosecurity.
Conclusion
As with all federal action plans, the proof will be in the implementation, which here involves the Herculean bureaucratic task of coordinating dozens of agencies. One thing is clear: with its call for a “‘try-first’ culture” for AI and its direction to federal agencies to prioritize support for states based on their AI policy stance, the plan is breaking new ground.