The Trump Administration’s plan to win the AI race – a legal perspective

Eversheds Sutherland (US) LLP

Since the beginning of the second Trump Administration, we have seen a dramatic shift in US AI policy away from mitigating AI’s social and physical harms and toward promoting America’s AI dominance on the world stage, advancing innovation and AI infrastructure, and removing regulatory barriers. The recent release of the White House’s “Winning the Race: America’s AI Action Plan” (AIAP or Plan),1 bolstered by three Executive Orders (EOs),2 is a resounding call for new agency priorities; new cooperative arrangements, rollbacks and reinterpretations of legal precedents; and new challenges to the states’ legal authority, all intended to ensure that the US prevails in the global AI race.

The AIAP makes clear that AI is a transformative technology that will “revolutionize the way we live and work” and calls for new initiatives to train the workforce in AI proficiency so that “our Nation’s workers and their families gain from the opportunities created.” The AIAP and EO#3 promote the Administration’s social agenda, requiring that large language models (LLMs) procured by the federal government be free from ideological bias and social agendas, such as references to diversity, equity and inclusion (DEI) and climate change. The National Institute of Standards and Technology (NIST) at the Department of Commerce (DOC) is instructed to revise the NIST AI Risk Management Framework (AI RMF) to remove references to DEI, perceived misinformation and climate change. Finally, the AIAP stresses that the nation must prevent “our advanced technologies from being misused or stolen by malicious actors,” and must monitor for emerging and unforeseen risks from AI. 

OVERVIEW

On January 23, 2025, three days into his second term, President Trump signed Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” which directed the development of the AIAP. The AIAP has three pillars: accelerate AI innovation, build American AI infrastructure, and lead in international AI diplomacy and security.

ACCELERATING AI INNOVATION

The AIAP envisions the federal government’s role as creating the conditions that will allow private-sector-led AI innovation to thrive. The following are some of the steps the Plan sets out for the federal government to take to ensure America has, and continues to have, the most powerful AI systems in the world.

Remove Red Tape and Onerous Regulation: So that the private sector can innovate without regulatory constraints, the Office of Management and Budget (OMB) will ask businesses and the public to identify federal regulations that hinder AI innovation and will take appropriate action. OMB will also work with all federal agencies to identify, modify or repeal burdensome regulations, interpretations, orders and guidance that hinder AI innovation.

Encourage Open-Source and Open-Weight AI: Open-source models and weights are freely available for developers to download and modify, and they have the potential to expand AI innovation and catalyze research and experimentation. Open-source models can also be sources of malware and intellectual property misappropriation. The AIAP calls for ensuring that the US has the leading open-source models “founded on American values” so that these models can become the global standard in some areas. The Plan recommends improving the financial markets for these systems, increasing the research community’s access to world-class private sector AI models, and driving the adoption of open-source models by small and medium-sized businesses, an effort led by DOC’s National Telecommunications and Information Administration (NTIA).

Enable AI Adoption: The Plan notes that some critical US sectors have been slow to adopt AI due to factors such as distrust and a lack of clear governance and risk mitigation standards. To encourage a “try-first” culture in American industry, the Plan urges several measures, including the development of regulatory sandboxes to encourage researchers and industry to test AI tools and share the results. It also recommends that the Department of Defense (DOD) and the Office of the Director of National Intelligence (ODNI) regularly prepare updated comparative studies of AI adoption by competitors and adversaries.

Invest in AI-Enabled Science: The Plan envisions that a new frontier of AI-enabled science will emerge and require new infrastructure and new kinds of scientific organizations. 

  • Through the National Science Foundation (NSF), the Department of Energy (DOE), NIST and other agencies, the US should invest in automated cloud-enabled labs for a variety of hard sciences in coordination with DOE’s National Laboratories. 
  • The US must lead in building the world’s largest and highest-quality AI-ready scientific datasets while respecting individual civil liberties, privacy and confidentiality. This initiative directs the National Science and Technology Council to develop minimum data quality standards for the use of biological, chemical and other scientific data modalities in AI model training. It contemplates creating secure compute environments within NSF and DOE and creating a whole-genome sequencing program for life on federal land that would include all biological domains.
  • The US must remain the leading pioneer in scientific breakthroughs through targeted investments in theoretical, computational and experimental research to discover new paradigms that advance the capabilities of AI.

Invest in AI Interpretability and Control Breakthroughs: The AIAP acknowledges that frontier AI models are poorly understood and technologists still cannot explain how LLMs produce a specific output. This lack of predictability makes it challenging to use advanced AI in many fields, including national defense and security. 

  • The Plan calls on the Defense Advanced Research Projects Agency (DARPA), with NIST’s Center for AI Standards and Innovation (CAISI) at DOC and NSF, to advance AI interpretability, AI control systems and adversarial robustness.
  • DOD, DOE, CAISI, the Department of Homeland Security (DHS), NSF and academia should hold an AI hackathon to solicit the best and the brightest in US academia to test AI systems for transparency, effectiveness, use control and security vulnerabilities.

Invest in Next-Generation Manufacturing: Deeming it crucial that America and its trusted allies be world-class manufacturers of next-generation technologies, the Plan states that the federal government should prioritize investment in these technologies by direct federal investment and solving supply chain challenges. 

Expand NIST’s Mandate: Previously recognized for its comprehensive and widely adopted AI RMF, NIST is now positioned as a central architect of AI governance, evaluation and assurance across both public and private sectors. 

  • NIST’s key new responsibilities include leading the development of standardized methods for evaluating AI systems, including publishing guidelines for federal agencies and supporting the science of AI measurement. 
  • NIST will launch domain-specific initiatives (e.g., healthcare, energy, agriculture) to accelerate the adoption of national AI standards and measure productivity gains from AI deployment. 
  • In collaboration with the DOD and ODNI, NIST will refine frameworks for secure AI development and lead efforts to incorporate AI into federal incident response protocols, including making updates to cybersecurity playbooks. NIST will help develop technical standards for high-security AI data centers used by the military and intelligence community.
  • As noted above, consistent with the shift in federal priorities toward ideological neutrality and national security, the AI RMF will be revised to remove references to misinformation, DEI and climate change. 

Observations

  • Companies operating in regulated industries (e.g., healthcare, energy, finance) should anticipate sector-specific benchmarks and productivity metrics that may influence procurement, reporting and operational practices. 
  • Firms developing or deploying AI systems should align with NIST’s evolving standards for secure-by-design technologies and incident response readiness.
  • Businesses using frontier models or engaging in international AI partnerships should be aware of NIST’s role in evaluating ideological bias and foreign influence, which may affect procurement eligibility and reputational risk.
  • While the NIST AI RMF remains a foundational resource, its revision and NIST’s expanded role suggest that businesses—especially those contracting with the federal government—may face more prescriptive compliance expectations. 


“BUILD, BABY, BUILD” AI INFRASTRUCTURE

AI is straining America’s power grid. While America’s energy generation infrastructure has remained largely stagnant since the 1970s, China has been rapidly building out its grid. The AIAP and EO#1 add a note of urgency to building out US AI infrastructure, stating that America’s path to AI dominance depends on reversing this trend and adopting various initiatives.

Create Streamlined Environmental Permitting for Data Centers, Semiconductor Manufacturing Facilities and Energy Infrastructure: Noting that America’s environmental permitting systems make it almost impossible to build the necessary AI infrastructure in the US with the speed required, the AIAP and EO#1 direct various initiatives:

  • Establish categorical exclusions under the National Environmental Policy Act (NEPA) to cover data center-related actions that normally do not have a significant effect on the environment.
  • Explore the need for a nationwide Clean Water Act Section 404 permit for data centers.
  • Expedite environmental permitting by streamlining or reducing regulations promulgated under the Clean Air Act; the Clean Water Act; the Comprehensive Environmental Response, Compensation and Liability Act; the Toxic Substances Control Act; and the Endangered Species Act.
  • Make federal lands available for data center construction and the construction of power generation infrastructure.
  • Apply AI to accelerate environmental reviews.

Develop a Grid to Match the Pace of AI Infrastructure: Calling the US electric grid “one of the largest and most complex machines on Earth” and the “lifeblood of the modern economy,” the AIAP proposes the following to enhance and expand the grid to meet today’s and tomorrow’s needs:

  • Stabilize the grid of today by preventing the premature decommissioning of critical power generation resources (which are often coal-fired) to ensure sufficient power generation.
  • Use advanced grid management technologies and upgrades to power lines to enhance reliability and unlock additional power on the system.
  • Embrace new energy generation sources, such as enhanced geothermal, nuclear fission and nuclear fusion.

EMPOWER AMERICAN WORKERS 

The AIAP supports a “worker-first” AI agenda focused on AI upskilling and training while de-emphasizing regulation. It envisions AI accelerating productivity and creating entirely new industries, while recognizing it will also transform how work gets done across all industries and occupations. The Administration stresses that AI will demand a serious workforce response to help workers navigate that transition. The AIAP taps several federal agencies, led by the Department of Labor (DOL), Department of Education (ED), NSF and DOC, to take specific actions to ensure that AI creates pathways to economic opportunity for American workers by prioritizing AI skill development and training as core objectives of relevant education and workforce funding streams. 

Empower American Workers in the Age of AI: This initiative includes proposed actions for the DOL, in collaboration with other federal agencies, to:

  • prioritize AI skills development as a core objective of education and workforce funding streams, including career and technical education, apprenticeships and other federally supported skills initiatives 
  • establish the AI Workforce Research Hub to lead a sustained federal effort to evaluate AI’s impact on the labor market and the American worker, supplying analysis that supports the tracking of AI adoption, job creation, job displacement and wage effects
  • fund rapid retraining for individuals impacted by AI-related job displacement
  • pilot new approaches to meet workforce challenges created by AI, which may include areas such as rapid retraining models to respond to labor market shifts and new models to support pathways into entry-level roles

Train a Skilled Workforce for AI Infrastructure: This initiative includes proposed actions for the DOL, in collaboration with other federal agencies, to:

  • create a national initiative identifying high-priority occupations critical to AI infrastructure and expand apprenticeships for such occupations
  • partner with state and local governments and various stakeholders to support the creation of industry-driven training programs for priority AI infrastructure occupations
  • partner with educational institutions and various stakeholders to expand early career exposure programs and pre-apprenticeship opportunities for middle and high school students in AI infrastructure occupations

CYBERSECURITY MEASURES

The AIAP includes several provisions that address cybersecurity threats to AI systems as well as cybersecurity threats from adversarial uses of AI systems. Those provisions reflect the need both to protect American AI innovation and to protect against—and recover from—AI-fueled cyberattacks.

Bolster Critical Infrastructure Cybersecurity: The AIAP addresses both how critical infrastructure providers can use AI for network defense and how their use of AI can expose them to adversarial threats. It recommends that critical infrastructure providers deploy robust, resilient, secure-by-design AI systems that can detect performance shifts and alert administrators to potential malicious activities. To support that goal, the AIAP calls for:

  • establishing an AI Information Sharing and Analysis Center (AI-ISAC)
  • having DHS issue and maintain guidance on remediating and responding to AI-specific vulnerabilities and threats
  • ensuring collaborative and consolidated sharing of known AI vulnerabilities with the private sector, leveraging existing cyber vulnerability-sharing mechanisms

Protect Commercial and Government AI Innovations: The AIAP advocates balancing the promotion of cutting-edge AI technologies with addressing national security risk. It calls for DOD, DHS, DOC, and members of the Intelligence Community (IC) to collaborate with leading private sector actors to actively protect AI innovations from malicious cyber actors, insider threats and other security risks.

Promote Mature Federal Capacity for AI Incident Response: The AIAP calls for the federal government to update and revise incident response planning, doctrine and best practices to account for AI adoption, particularly in critical infrastructure sectors. Specifically, the AIAP calls for:

  • NIST and CAISI at DOC to partner with industry to provide necessary resources to incident response teams
  • CAISI to update US government incident response playbooks to incorporate AI systems and responsible officials
  • DOD, DHS and ODNI, in coordination with relevant White House offices, to encourage the responsible sharing of AI vulnerability information as part of their efforts to implement Executive Order 14306.

Promote Secure-by-Design AI Technologies and Applications: Warning that AI systems can be vulnerable to data poisoning and other malicious attacks, and with a specific reference to national security applications, the AIAP calls for a focus on promoting resilient and secure AI development and deployment, including through interagency efforts to refine DOD’s Responsible AI and Generative AI Frameworks, Roadmaps, and Toolkits and publication of an Intelligence Community Standard on AI Assurance.

NATIONAL SECURITY MEASURES

Reflecting a growing consensus on the national security risks and opportunities posed by the rapid advancement of AI capabilities, the AIAP includes provisions to advance AI-related national security interests through multinational diplomacy; measures to guard against the risk that adversaries could use AI to advance chemical, biological, radiological and nuclear weapons programs; and provisions to advance the secure use of AI by the DOD and the IC.

Counter Chinese Influence in International Governance Bodies: The AIAP acknowledges the value of like-minded nations advancing their shared values regarding AI through international organizations and standards-setting bodies. It warns, however, that those efforts often result in unnecessary provisions or codes of conduct that may not reflect American values and are sometimes influenced by Chinese efforts to shape standards for facial recognition and surveillance. The AIAP calls for the Department of State (DOS) and DOC to advocate more robustly for agreements that promote innovation, reflect American values and counter authoritarian influence.

Ensure That the US Government Is at the Forefront of Evaluating National Security Risks in Frontier Models: Addressing the risk that the most powerful AI systems would provide adversarial nations with an ability to accelerate both cyberattacks and the development of chemical, biological, radiological, nuclear or explosive (CBRNE) weapons, the AIAP calls for CAISI at DOC (i) to work with relevant agencies to evaluate and assess how reliance on foreign AI systems in critical infrastructure applications could result in security vulnerabilities or opportunities for malign influence and (ii) to recruit leading AI researchers at key federal agencies who can collaborate with research institutions to ensure cutting-edge evaluations and analyses of AI systems.

Invest in Biosecurity: The AIAP calls for a multitiered approach, in coordination with allies and partners, to prevent malicious actors from using AI to synthesize harmful pathogens and other biomolecules. Recommended steps include:

  • stricter requirements and enhanced technical screening for nucleic acid synthesis tools at all institutions receiving federal funding for scientific research
  • facilitating data sharing between nucleic acid synthesis providers to screen for potentially fraudulent or malicious customers
  • collaboration across DOC, national security agencies and research institutions to develop and implement national security-related AI evaluations.

Drive Adoption of AI Within the Department of Defense: Recognizing the potential of AI to transform warfighting and national defense, the AIAP calls for DOD and the armed forces to aggressively adopt secure and reliable AI solutions. The AIAP specifically tasks DOD with several actions, including:

  • identifying and developing the necessary talent to drive the effective deployment of AI-enabled capabilities
  • establishing an AI & Autonomous Systems Virtual Proving Ground at DOD
  • identifying operational and enabling functional workflows best suited to AI automation and then transitioning those workflows to AI solutions as quickly as practicable
  • establishing priority DOD access to computing resources in the event of a national emergency, including through agreements with cloud service providers
  • developing AI-focused programs at the Senior Military Colleges.

Build High-Security Data Centers for Military and Intelligence Community Usage: Anticipating that AI will soon be used to process some of the US government’s most sensitive data, the AIAP calls for improving protection at relevant data centers, including against nation-state actors. Relevant measures include:

  • directing DOD, the IC, the National Security Council, NIST and CAISI, working with industry and Federally Funded Research and Development Centers, to develop new standards for high-security data centers
  • recommending that agencies accelerate adoption of classified compute environments that can support secure AI workloads

EXPORT PROMOTION AND EXPORT CONTROL MEASURES

The AIAP and EO#2 take a new approach to the export of US-developed AI technology by encouraging the adoption of US solutions—rather than adversaries’ competing products—through the promotion of “full AI technology stack” export packages. At the same time, however, the AIAP calls for using “creative approaches” to limit adversaries’ access to advanced AI compute and for developing greater consensus regarding US export control priorities among allies and partners.

Export American AI to Allies and Partners: The AIAP takes a new approach to export controls relating to AI hardware and software, calling for the United States to meet global demand by exporting its “full AI technology stack” to “all countries willing to join America’s AI alliance” and to keep those countries from turning instead to adversaries and rivals. Specifically, the AIAP calls for:

  • establishing a program at DOC to receive industry proposals for full-stack AI export packages
  • tasking the Economic Diplomacy Action Group, the US Trade and Development Agency, the Export-Import Bank, the US International Development Finance Corporation and DOS to work with DOC to facilitate deals that meet US-approved security requirements and standards.

Strengthen AI Compute Export Control Enforcement: The AIAP calls for “creative approaches” to export control enforcement to deny adversaries access to “advanced AI compute.” Recommended measures include:

  • using current and new location verification features to ensure that chips for advanced AI compute are not in countries of concern
  • a collaborative effort between DOC and the IC to advance global chip export control enforcement, including monitoring emerging technology developments in AI compute and using that knowledge to ensure intelligence coverage of, and end-use monitoring in, countries or regions where chips are being diverted

Plug Loopholes in Existing Semiconductor Manufacturing Export Controls: The AIAP calls for the United States to protect the national security advantage afforded by its lead in semiconductor manufacturing by closing gaps in, and enhancing enforcement of, semiconductor manufacturing export controls. This would include new export controls on semiconductor manufacturing subsystems.

Align Protection Measures Globally: To bolster strong American export controls on sensitive AI-related technologies, the AIAP calls for encouraging partners to follow US controls and for using measures such as the Foreign Direct Product Rule and secondary tariffs against partners that fail to do so. The AIAP’s recommended measures include:

  • mitigating risks from strategic adversaries by sharing information on complementary technology protection measures, including in basic research and higher education
  • creating an AI global alliance by inducing key allies to adopt complementary AI protection systems and export controls across the supply chain, especially with regard to key adversaries
  • developing options for international cooperation to protect the AI tech stack beyond multilateral treaty bodies while also leveling the playing field by aligning US and allied controls
  • encouraging allies to adopt US export controls, to work with the US to develop new controls and to prevent adversaries from providing AI solutions to their defense industries or acquiring controlling stakes in defense suppliers

Observations

  • Cybersecurity threats to and from AI systems remain a consistent focus of federal efforts, with companies in critical infrastructure sectors facing particular scrutiny.
  • The AIAP includes new direction to accelerate adoption of AI solutions at the Department of Defense and in the Intelligence Community, which will in both cases require extensive testing and significant security measures.
  • The AIAP’s goal of promoting American competitiveness through the export of “full AI technology stack” packages will require significant discussion and policy development, offering private sector stakeholders greater opportunity to shape US export policy for advanced AI compute capabilities.


ROLLBACKS

Review of FTC Authorities and Rollback of FTC Orders: As part of its deregulatory agenda, the Plan calls on the Federal Trade Commission (FTC) to review all investigations it commenced under the Biden Administration to “ensure they do not advance theories of liability that unduly burden AI innovation.” The FTC should also set aside any consent decrees, final orders and injunctions that unduly burden AI innovation.

PREVENTING WOKE AI IN THE FEDERAL GOVERNMENT 

EO#3 argues that ideological biases and social agendas, such as DEI, when built into AI models, can distort AI outputs. As a result, federal agencies are ordered to procure only LLMs developed in accordance with two principles: “truth-seeking” (LLMs must be truthful in responding to user prompts) and “ideological neutrality” (LLMs must be neutral, nonpartisan tools that do not manipulate responses in favor of ideological dogmas such as DEI).

INDIRECT STATE PREEMPTION

Tie Budgetary Decisions to a State’s AI Regulatory Climate: The AIAP states that the federal government should not direct federal funds toward states with a “burdensome” regulatory climate, but at the same time it should not interfere with the right of states to pass prudent laws. The term “burdensome” is not defined and could be read to extend to state cybersecurity and privacy laws and regulations that apply to AI. To carry out this policy, the AIAP directs OMB, working with other federal agencies that have discretionary AI-related funding, to consider a state’s AI regulatory climate when making funding decisions and to limit funding if the regulatory climate would hinder the effectiveness of the federal funding.

CONCLUSION

While the AIAP serves a rhetorical and political agenda, it ultimately promotes a bold vision for US leadership in the development of this revolutionary technology. It tackles a substantial range of AI issues that are top of mind for the American public, such as job displacement and national security, while pressing ahead with an aggressive agenda to build out the science, infrastructure and financing critical to AI innovation. Despite the federal de-emphasis on regulation and enforcement, organizations should pay careful attention to state regulation and take the steps necessary to maintain appropriate AI cybersecurity (especially when providing the federal government with advanced technologies), export control compliance for overseas expansion and contractual allocations of risk. The lack of federal regulation and enforcement around AI guardrails may also leave greater latitude for private litigants.

__________

1 “Winning the Race: America’s AI Action Plan,” White House, July 23, 2025.
2 “Accelerating Permitting of Data Center Infrastructure,” Executive Order, July 23, 2025 (EO#1); “Promoting the Export of the American AI Technology Stack,” Executive Order, July 23, 2025 (EO#2); “Preventing Woke AI in the Federal Government,” Executive Order, July 23, 2025 (EO#3).


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Eversheds Sutherland (US) LLP
