[co-author: Stephanie Kozol]*
On July 23, President Trump announced efforts to position the U.S. at the forefront of the global artificial intelligence (AI) race. “Winning the AI Race: America’s AI Action Plan” details how the federal government will advance the AI industry and was issued pursuant to the president’s January 23 Executive Order (EO) 14179, “Removing Barriers to American Leadership in Artificial Intelligence.”
The plan outlines three strategic pillars as its foundation, which together include more than 90 federal policy actions. The pillars are titled “Accelerating Innovation,” “Building American AI Infrastructure,” and “Leading in International Diplomacy and Security.” The president also contemporaneously signed three executive orders to implement the plan. They include “Preventing Woke AI in the Federal Government,” directing federal agencies to procure only ideologically neutral large language models (LLMs); “Accelerating Federal Permitting of Data Center Infrastructure,” to provide federal lands and resources for AI data centers; and “Promoting the Export of the American AI Technology Stack,” to create and implement the “American AI Exports Program.”
These measures continue the administration’s vocal efforts to reduce AI innovation roadblocks and avoid heavy-handed AI governance regulation. While widely supported by the AI industry, the approach will likely draw opposition from consumer groups and state regulators.
Accelerating Innovation
The plan notes that integrating AI technology is often difficult due in part to “a complex regulatory landscape, and a lack of clear governance and risk mitigation standards.” It seeks to facilitate such integration through regulatory sandboxes, or “AI Centers of Excellence,” to be established nationwide, allowing industry to “rapidly deploy and test AI tools while committing to open sharing of data and results.”
The administration is also using the plan to advance broader political and policy objectives. The Department of Commerce and the National Institute of Standards and Technology (NIST) are directed to remove references to “misinformation,” climate change, and diversity, equity, and inclusion in the NIST AI Risk Management Framework. Federal procurement guidelines must now be adjusted to ensure that any government-used LLM is “objective and free from top-down ideological bias.”
Building American AI Infrastructure
A well-publicized (and controversial) effect of AI proliferation is the technology’s reliance on data centers, which consume substantial amounts of energy and take up vast areas of land. The plan supports an expedited permitting process for such data centers to “further promote” rapid development of AI technologies. Relatedly, it also recommends upgrades to the U.S. electrical grid to support future energy-intensive industries and directs the Department of Defense and NIST to develop new technical standards for high-security AI data centers. Finally, the effort seeks to identify and properly train an increased number of workers in trades that support AI infrastructure, such as electricians and HVAC technicians.
Leading in International Diplomacy and Security
To promote American AI technologies globally, the Department of Commerce and the State Department are to assist American industries in delivering “secure, full-stack AI export packages – including hardware, models, software, applications, and standards” to American allies. Companies with such “full-stack” AI export programs must comply with pertinent export control frameworks. Further, as China is reportedly spending considerable sums on AI development without the use of Western-made microchips, the plan addresses countering Chinese influence over AI policy development in international governance bodies. It also seeks to strengthen AI controls to prevent use by U.S. adversaries and to close loopholes in existing semiconductor manufacturing export controls.
Takeaways
The U.S. government’s approach differs substantially from that of the EU, which enacted the “AI Act” in 2024. The EU AI Act is a comprehensive AI governance law that imposes risk-based regulation of AI systems and allows member states to establish AI governing authorities to conduct “market surveillance” of AI utilization and enforce strict AI requirements accordingly. The EU’s consumer-oriented approach to AI regulation mirrors the scope and breadth of other recent technology-focused EU regulations, such as the privacy-centric General Data Protection Regulation (GDPR), and stands in contrast to the business-friendly bent of the “AI Action Plan.”
As further evidence of the U.S.’ divergence from the EU, the Trump administration will push Congress to revisit the AI “moratorium” struck from this year’s budget reconciliation bill. That measure would have prevented states from enforcing AI-specific laws for 10 years, essentially eviscerating consumer-oriented AI laws in California, Colorado, Texas, and Utah, as well as pending AI legislation in dozens of other states. In response to that effort, a bipartisan group of 40 state attorneys general sent a letter to Congress opposing the measure, citing state sovereignty and their duty to protect consumers. It is unclear whether a revived federal moratorium push would succeed given previous bipartisan congressional opposition to the provision. Notably, however, several state AGs have signaled that they will enforce existing, non-AI-specific privacy, consumer protection, and anti-discrimination laws as they relate to AI use, regardless of whether federal law ultimately supersedes AI-specific state laws.
Considering the above, developers and deployers of AI should take note of the AI Action Plan and position themselves accordingly. Whether navigating export control compliance, meeting federal procurement standards for AI objectivity and transparency, or reducing innovation roadblocks, companies must heed the plan’s mandates to ensure compliance and capitalize on emerging opportunities. At the same time, absent a federal provision that specifically preempts state regulation, businesses must continue to adhere to a patchwork of state AI laws and existing consumer protection laws, often with complex and varied requirements. Engaging relevant internal stakeholders and consulting experienced outside counsel will help mitigate regulatory and other legal exposure within an ever-changing AI landscape.
*Senior Government Relations Manager