A Call to Action: President Trump’s Policy Blueprint for AI Development and Innovation

Morrison & Foerster LLP

On July 23, 2025, President Trump released his Artificial Intelligence (AI) Action Plan, with the aim of ushering in an era of American dominance in this rapidly emerging technology. The AI Action Plan delivers on a promise first made in Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” to set forth a new American policy on developing and harnessing AI. To implement this new policy, President Trump also issued three new Executive Orders. These Executive Orders focus on “Preventing Woke AI in the Federal Government,” “Accelerating Federal Permitting of Data Center Infrastructure,” and “Promoting the Export of the American AI Technology Stack.” At their core, the AI Action Plan and accompanying Executive Orders focus on reducing or eliminating perceived obstacles to AI development that the administration believes will cause the U.S. to fall behind its competitors unless addressed.

We discuss the key tenets of President Trump’s AI Action Plan and the accompanying Executive Orders below, with an emphasis on the impact to government contractors and subcontractors and on export control considerations. For additional detail about the Executive Orders and the AI Action Plan, see our detailed analysis.

Reducing Regulatory Burdens

Consistent with Executive Order 14179 (which was issued on January 23, 2025), a primary focus of President Trump’s AI Action Plan is reducing bureaucratic red tape that the administration fears could stifle innovation. To this end, the Plan calls on the Office of Science and Technology Policy (OSTP) to engage with industry about the state of federal regulation and on the Office of Management and Budget (OMB) to reduce regulation of AI and otherwise review and revise guidance and policy statements that may hinder AI development. Given the paucity of federal regulations specifically addressing AI, we expect this scrutiny will focus on rolling back and replacing existing agency guidance regarding AI (for example, the Department of Defense (DoD) AI Principles and AI adoption strategies issued over the past few years). Government contractors therefore should reassess any ongoing efforts to comply with agency guidance or voluntary frameworks that predate the AI Action Plan, especially those issued by the Biden administration.

Notably, the AI Action Plan also suggests that the federal government will withhold AI-related federal funding from states that attempt to regulate AI in ways with which the administration disagrees. This sets the stage for potential conflicts between the federal government and states like California and Colorado, which have taken an active role in regulating AI, and creates an area where state and federal law may diverge (much as some states and the federal government now have potentially conflicting requirements with respect to diversity, equity, and inclusion (DEI)). Companies using or offering AI technologies that hold both state and federal contracts should proceed cautiously as they compete for future awards: attempts to comply with state regulatory schemes disfavored by the Trump administration could draw unwanted scrutiny, as has occurred, for instance, where federal and state DEI-related/affirmative action requirements conflict.

Eliminating “Woke” Ideology

The AI Action Plan, accompanied by the new Executive Order on “Preventing Woke AI in the Federal Government,” continues President Trump’s crusade against DEI and climate change. It directs the Department of Commerce (DOC), through the National Institute of Standards and Technology (NIST), to revise the NIST AI Risk Management Framework to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.” It also calls for the updating of federal procurement guidelines to “ensure that the government only contracts with frontier large language model (LLM) developers who ensure that their systems are objective and free from top-down ideological bias.”

It is unclear what this new mandate means for government contractors in practice. AI models train on vast troves of data, and it is not apparent that such training can easily be undone to align models with the Trump administration’s policies on DEI and climate change. Moreover, LLM developers are already intensely focused on managing bias within datasets to improve the accuracy of outputs, albeit bias of a different kind.

In any event, as with the administration’s prior anti-DEI Executive Orders, the purpose of the Woke AI Executive Order appears to be to use federal funding as both a carrot and a stick to influence private industry conduct. In this case, the Woke AI Executive Order aims to drive “ideological neutrality” by prohibiting LLM developers from “manipulat[ing] responses in favor of ideological dogmas such as DEI” or “encod[ing] partisan or ideological judgments into an LLM’s outputs unless those judgments are prompted by or otherwise readily accessible to the end user.” It empowers agencies to enforce this requirement not only by permitting them to consider compliance with the Order when awarding government contracts to LLM developers, but also by adding terms to federal contracts (and, where practicable, modifying existing contracts) that hold LLM developers responsible for “decommissioning costs” if a contract is terminated early for the developer’s noncompliance. Moreover, the Executive Order contemplates that contractors may need to disclose “the LLM’s system prompt, specifications, evaluations, or other relevant documentation” to demonstrate compliance with the mandate for “ideological neutrality.”

In light of the foregoing, government contractors should anticipate government scrutiny of AI models and their outputs. In particular, contractors may face an increased risk of whistleblower activity under the False Claims Act, as employees or other users of AI services see potential financial windfalls in alleging that LLM developers are knowingly failing to comply with the mandate for ideological neutrality. LLM developers therefore should implement processes and procedures that document their contemporaneous understanding of the company’s legal obligations and the steps the company is taking to comply with the Executive Order. The anticipated issuance of new agency guidance aligned with the AI Action Plan should facilitate the creation of these processes and procedures.

Providing New Funding for AI and AI-Adjacent Projects

The AI Action Plan also announces a new policy aimed at bolstering AI infrastructure development domestically. Specifically, the Executive Order on “Accelerating Federal Permitting of Data Center Infrastructure” calls for the Secretary of Commerce to launch an initiative to provide financial support—to include grants, loans and loan guarantees, tax incentives, and offtake agreements—for qualifying projects for data center or related component development (such as energy infrastructure, semiconductors and semiconductor materials, networking equipment, and data storage). Moreover, to eliminate potential obstacles to such development efforts, the Executive Order calls for the streamlining of environmental permitting requirements, as well as making federal lands available for construction projects.

The AI Action Plan further notes that the federal government will invest in “theoretical, computational, and experimental research to preserve America’s leadership in discovering new and transformative paradigms that advance the capabilities of AI.” The federal government, through the Defense Advanced Research Projects Agency (DARPA), will also launch a technology development program to advance AI interpretability (i.e., how a model generates its output), AI control systems, and adversarial robustness.

All told, the federal government appears poised to increase spending on AI-related projects substantially in the coming years. Moreover, the focus on theoretical and experimental research signals that the Trump administration also has an eye on the next big technology, whether it be quantum computing or something as yet unknown.

Focus on Cybersecurity

The AI Action Plan acknowledges that greater reliance on AI will necessitate even greater emphasis on cybersecurity, especially as agencies such as the DoD or the Intelligence Community (IC) adopt AI in their activities. The AI Action Plan therefore calls for collaboration and information sharing between agency leaders and leading American AI developers to protect AI technology from security threats. The Plan contemplates the Department of Homeland Security taking a leading role in information sharing through the establishment of an AI Information-Sharing and Analysis Center.

The information-sharing model identified in the AI Action Plan is not unlike those already in existence between the defense industrial base, the DoD, and the IC. As the Plan itself contemplates, it therefore may be possible for existing information-sharing arrangements to subsume this new sharing and avoid unnecessary duplication.

Focus on Export Controls and Supply Chain

The AI Action Plan outlines strategies for America to lead in international AI diplomacy and security by promoting exports and enhancing export and supply chain controls in key areas. These strategies are intended to drive global adoption of American AI systems, computing hardware, and standards, while also managing competition with “adversaries.”

On export promotion, the AI Action Plan—and the related Executive Order 14320—calls for an “American AI Exports Program” and tasks the Department of Commerce with gathering proposals from “industry consortia” for “full-stack export packages,” followed by interagency coordination to facilitate deals that meet U.S.-approved “security requirements and standards.” The Plan is explicit that the intent is to use the distribution and diffusion of U.S. technology to “stop our strategic rivals from making our allies dependent on foreign adversary technology.” Against the backdrop of the Trump administration’s rescission of the “AI Diffusion” rule in May of this year, the Plan portends a relative easing of export controls (especially with respect to U.S. partners and allies) on key items and U.S. Person activities that support global adoption of the AI “technology stack,” including “hardware, models, software, applications, and standards.” On the other hand, the Plan’s references to “security requirements and standards” indicate that the administration intends to pursue licensing conditions and other regulatory constraints on the acquisition and use of these items and services, for example, limitations on the end uses and end users permitted to train or host models or software. Notably, the Executive Order states that proposals must “comply with all relevant United States export control regimes, outbound investment regulations, and end-user policies, including [the Export Control Reform Act (ECRA)], and relevant guidance from the Bureau of Industry and Security (BIS).” Ultimately, companies, especially non-U.S. companies, should be prepared to manage compliance with U.S. export control requirements imposed through license conditions on these items and activities.

The AI Action Plan outlines three ways in which export controls should be tightened to deny “foreign adversaries” access to advanced AI compute and related resources.

  • First, the Plan calls for “creative approaches to export control enforcement,” including location verification mechanisms and collaborations with the IC to expand and enhance global monitoring, especially related to “possible countries or regions where chips are being diverted.” These recommendations follow recent reports that BIS has increased its enforcement focus in several Southeast Asian nations, and that Singapore, Malaysia, and Taiwan have adopted or are in the process of adopting their own controls or enforcement strategies to address shipments (or transshipments) of AI-related hardware to sensitive end users.
  • Second, the Plan calls for the Commerce Department to “plug loopholes” in controls on semiconductor manufacturing equipment, in particular calling for “new” controls on semiconductor manufacturing subsystems (i.e., components used to produce wafer fab equipment). Notably, the Plan does not expressly call for expanded restrictions on manufacturing equipment generally, though it does emphasize the need to align allies’ controls with existing U.S. controls, which could have significant implications in this area.
  • Third, the Plan calls for broad alignment of partner and allied controls with U.S. export controls, including through the use of extraterritorial controls—in particular the Foreign Direct Product Rule and “secondary tariffs”—to achieve greater alignment and “induce” key allies to adopt protection systems and controls. The Plan also calls for the development and implementation of “complementary technology protection measures,” including in “basic research and higher education,” terms that portend an enhanced focus on international research security and deemed exports. Finally, the Plan notes that the Departments of Commerce and Defense should coordinate with allies to ensure they prohibit U.S. adversaries from “supplying their defense-industrial base or acquiring controlling stakes in defense suppliers.” This parallels calls elsewhere in the Plan for security guardrails that prohibit adversaries from inserting sensitive inputs into domestic AI infrastructure and that keep AI infrastructure “free from foreign adversary information and communications technology and services (ICTS).” Both suggest a desire to align the AI Action Plan with broader efforts to keep foreign adversaries out of critical supply chains, including the Commerce Department’s Information and Communications Technology Supply Chain authorities, CFIUS, and the supply chain controls that apply to the U.S. government and its contractors and subcontractors.

With these objectives and statements in mind, companies operating in the AI and semiconductor industries and in geographic regions of focus for export controls and related authorities should:

  • Understand their exposure to U.S. jurisdiction and transshipment risks, especially in the context of BIS’s expectations regarding “knowledge” of restricted end uses and end users and relevant “red flags”;
  • Prepare to manage compliance with U.S. export control requirements and related policies, whether imposed through license conditions or extraterritorial controls;
  • Have plans in place to respond to end-use checks and compliance outreach to mitigate the risk and expense of violations or protracted multi-jurisdictional investigations;
  • To the extent possible, ensure that commercial arrangements provide flexibility to respond to changes in AI-related export controls.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Morrison & Foerster LLP
