Trump Administration Releases Action Plan and Executive Orders for “AI Dominance”

Woods Rogers
On July 23, 2025, the Trump Administration issued a 28-page action plan (the Action Plan or the Plan) and three corresponding Executive Orders designed to “win the AI race” and achieve “global dominance” in the AI marketplace.  

The Plan identifies three key pillars and more than 90 proposed policy actions to promote U.S. advancement in AI. The three pillars are (1) AI innovation; (2) AI infrastructure; and (3) international diplomacy and security.

Let’s take a look at each pillar and the key policy proposals it includes.

Pillar 1: AI Innovation

The first pillar of the Plan seeks to “create the conditions” where private-sector-led AI innovation can flourish. With that objective in mind, the first pillar contains an array of different policy proposals aimed at reducing regulations, promoting workforce development, and encouraging “open-source and open-weight AI.” Key policy proposals contained in the first pillar include the following:

  • Tether Federal Funds to State AI Regulations: The Plan calls on OMB to work with federal agencies with “AI-related discretionary funding” to assess a state’s regulatory landscape when deciding whether to award funds. This proposal appears to be an effort to impose a quasi-moratorium on state-level AI regulations, or at the very least to signal that states should tread lightly when contemplating new AI regulations. It also echoes Congress’s recent attempt to impose a 10-year moratorium on state AI regulations in the One Big Beautiful Bill Act; that proposed moratorium ultimately failed for lack of congressional support.
  • Revise the NIST AI Risk Management Framework: The Plan calls on the Department of Commerce (DOC), through the National Institute of Standards and Technology (NIST), to revise the NIST AI Risk Management Framework with a focus on eliminating “references to misinformation, Diversity, Equity, and Inclusion, and climate change.”
  • Create AI Regulatory Sandboxes: The Plan calls for the establishment of “regulatory sandboxes” or “AI Centers of Excellence” where AI researchers, startups, and established enterprises will be able to deploy and test new AI tools and applications without concern for incurring regulatory fines or penalties.
  • Emphasis on AI Training and Workforce Development: The Plan states that the administration “supports a worker-first AI agenda” and calls for the establishment of a workforce research hub within the Department of Labor. The Plan also directs the Labor Department to “leverage available discretionary funding, where appropriate, to fund rapid retraining for individuals impacted by AI-related job displacement.” In addition, the Labor Department is tasked with issuing “clarifying guidance” to help states identify eligible dislocated workers in sectors undergoing significant structural change tied to AI adoption, as well as guidance clarifying how state Rapid Response funds can be used to proactively upskill workers at risk of future displacement.
  • Promote AI Adoption and Integration in the U.S. Department of Defense (DOD): The Plan calls for DOD to develop a streamlined process for “classifying, evaluating, and optimizing” workflows involved in major operational and enabling functions. In addition, DOD should take steps to develop a list of priority workflows for automation with AI and when an identified workflow is successfully automated, DOD should “strive to permanently transition that workflow to the AI-based implementation as quickly as practicable.”
  • Combat Synthetic Media and Deepfakes: The Plan calls on the U.S. Department of Justice (DOJ) to issue guidance to federal agencies that engage in adjudications to explore adoption of a “deepfake standard” similar to the proposed Federal Rules of Evidence Rule 901(c) under consideration by the Advisory Committee on Evidence Rules.

Pillar 2: AI Infrastructure

The second pillar of the Action Plan highlights the importance of revamping the U.S. energy infrastructure and facilitating domestic AI manufacturing. Though focused on infrastructure, the second pillar also contains notable policy proposals for U.S. cybersecurity. Notable proposals contained in the second pillar include the following:

  • Streamline Permit Process for AI Data Centers: The Plan calls for expediting and modernizing permitting for data centers and semiconductor fabs, as well as creating new national initiatives to grow the workforce in high-demand occupations such as electricians and HVAC technicians.
  • Bolster Semiconductor Manufacturing: The Plan calls on the CHIPS Program Office within DOC to remove “all extraneous policy requirements for CHIPS-funded semiconductor manufacturing projects” in the United States. In addition, the Plan calls on DOC and other federal agencies to collaborate on streamlining regulations that “slow semiconductor manufacturing efforts.”
  • Establish AI Information Sharing and Analysis Center (AI-ISAC): The Plan calls for the Department of Homeland Security (DHS) to collaborate with the Office of the National Cyber Director and the Center for AI Standards and Innovation to establish AI-ISAC and promote the sharing of AI-security threat information and intelligence across U.S. critical infrastructure sectors.
  • Promote Adoption of AI Incident Response Plans: The Plan states that the federal government should “promote the development and incorporation of AI Incident Response actions into existing incident response doctrine and best-practices for both the public and private sectors.” With this objective in mind, the Plan calls on NIST, in conjunction with CAISI, to partner with private sector entities in the AI and cybersecurity sector to ensure AI is included in the establishment of standards, response frameworks, best practices, and technical capabilities (e.g., fly-away kits) of incident response teams.
  • Promote Secure-by-Design AI Technologies: The Plan highlights the risks to AI systems from various adversarial inputs, including data poisoning and privacy attacks. To help combat these risks and promote secure-by-design AI technologies, the Plan calls on DOD to collaborate with NIST and ODNI to refine DOD’s Responsible AI and Generative AI Frameworks, Roadmaps, and Toolkits. In addition, the Plan calls on ODNI to coordinate with DOD and CAISI to publish an IC Standard on AI Assurance under the auspices of Intelligence Community Directive 505 on Artificial Intelligence.

Pillar 3: International Diplomacy and Security To Promote U.S.-Developed AI

The third pillar of the Plan seeks to drive the adoption of “American AI systems, computing hardware, and standards throughout the world.” To promote adoption, the Plan seeks to meet global demand for AI by exporting a U.S.-developed “full AI technology stack” that would include AI hardware, models, software, applications, and standards. This AI technology stack would be accessible “to all countries willing to join America’s AI alliance.” The development of this AI export stack would be overseen by DOC. Other notable policy proposals contained in the third pillar of the Action Plan include the following:

  • Strategic Global AI Alliance: The Plan calls for the development of a “technology diplomacy strategic plan” to forge an AI global alliance. The objective of this strategic plan would be to align incentives and policy levers across government to “induce key allies to adopt complementary AI protection systems and export controls across the supply chain.” The strategic plan would aim to help ensure that American allies do not supply adversaries with AI technologies on which the U.S. is seeking to impose export controls (more on this below).
  • Strengthen AI Export Controls: The Plan seeks to increase enforcement of U.S. export controls as they relate to AI products and services. Specifically, the Plan calls on the DOC and other federal agencies to collaborate with industry to “explore leveraging new and existing location verification features on advanced AI compute to ensure that the chips are not in countries of concern.”
  • Evaluate National Security Risks Posed by Frontier AI Models: The Plan calls on CAISI to collaborate with other federal agencies to “evaluate frontier AI systems for national security risks in partnership with frontier AI developers.” In addition, the Plan calls on CAISI to “evaluate and assess potential security vulnerabilities and malign foreign influence arising from the use of adversaries’ AI systems in critical infrastructure and elsewhere in the American economy, including the possibility of backdoors and other malicious behavior.” The Plan states that these security evaluations should include assessments regarding the capabilities of U.S. and adversary AI systems, the adoption of foreign AI systems, and the state of international AI competition.

AI Executive Orders

In addition to the Action Plan, President Trump signed three Executive Orders (“EOs”) drafted specifically to align with the policy proposals contained in the Plan. The EOs include:

  1. Promoting The Export of the American AI Technology Stack: This EO directs the State Department, Commerce Department, and Office of Science and Technology Policy (“OSTP”) to coordinate and create an American AI Exports Program that would aid in the deployment of full-stack AI export packages. These agencies have until October 21, 2025, to get the program up and running. This EO also directs the Economic Diplomacy Action Group to utilize federal financing tools to aid in deploying the full-stack AI export packages.
  2. Accelerating Federal Permitting of Data Center Infrastructure: This EO directs the Commerce Department and OSTP to coordinate an initiative that would provide financial support for the development of data center projects. The financing options would include offering loans, loan guarantees, tax incentives, and so forth. In addition, this EO contains provisions aimed at streamlining the approval processes for data center projects. For example, the EO directs the Environmental Protection Agency to issue guidance to aid in expediting data center environmental reviews.
  3. Preventing Woke AI in the Federal Government: This EO seeks to ensure that the federal procurement process avoids the use of AI systems that contain “ideological biases or social agendas.” The EO sets out two “unbiased AI principles” to be used in evaluating procurement of AI large language models (LLMs):
    1. Whether the LLM is “truthful” in responding to user prompts seeking factual information or analysis; and
    2. Whether the LLM is “ideologically neutral.”

The Office of Management and Budget (OMB), in consultation with the Administrator for Federal Procurement Policy, the Administrator of General Services, and the Director of OSTP, is directed to issue guidance to federal agencies regarding AI procurement within 120 days of the EO (i.e., November 20, 2025). Once this guidance is released, federal agencies have 90 days to adopt specific procedures designed to adhere to the guidance.

What The AI Action Plan Means for U.S. Companies

U.S. companies currently using AI tools, or contemplating adopting them, can likely breathe a sigh of relief. The Action Plan and EOs generally do not seek to impose any new regulatory requirements or compliance obligations on private sector entities. Rather, they seek to remove federal regulations that could hinder AI development and deployment in the United States. The Action Plan and EOs also extend a metaphorical olive branch to the private sector by asking for input and recommendations on strategies for removing regulatory barriers to AI innovation.

Despite the proposed “deregulation” of AI at the federal level, U.S. companies must still navigate a myriad of state AI regulations, including the Colorado AI Act, Utah’s Artificial Intelligence Policy Act, and the Texas Responsible AI Governance Act, among others. In addition, U.S. companies operating abroad must be prepared to comply with a growing list of international AI laws, such as the EU AI Act, South Korea’s AI Basic Act, and Japan’s AI Promotion Act. As a result, U.S. companies should keep AI regulatory compliance top of mind to avoid potential domestic or international regulatory scrutiny.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Woods Rogers
