Seyfarth Synopsis: On July 23, 2025, the White House released “America’s AI Action Plan” and President Trump signed three Executive Orders addressing AI development, federal procurement, and infrastructure. The 25-page AI Action Plan focuses on bolstering American AI dominance through deregulation, the promotion of ideologically neutral AI systems, infrastructure investment, and international competition. The AI Action Plan and accompanying Executive Orders, along with President Trump’s signing remarks, reflect the Administration’s deregulatory approach to artificial intelligence and its desire to accelerate American AI development under a “Build Baby Build!” banner to achieve “global AI dominance.” For labor and employment practitioners, the most significant developments are the Plan’s measured approach to state AI regulation and its directive to revise NIST’s AI Risk Management Framework to eliminate references to Diversity, Equity and Inclusion and other concepts. Finally, despite its broad framing, the Executive Order “Preventing Woke AI in the Federal Government” has a relatively narrow immediate reach given its focus on federal AI procurement practices.
The AI Action Plan was issued pursuant to EO 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” signed by President Trump on January 23, 2025. EO 14179 specifically called for the removal of regulatory barriers that impede AI innovation and directed the development of the AI Action Plan to achieve the policy goal of sustaining and enhancing “America’s global AI dominance” to promote “human flourishing, economic competitiveness, and national security.” In developing the AI Action Plan, the White House solicited extensive public input and received over 10,000 comments from academia, industry groups, private sector organizations, and government entities.
Organized around three pillars—accelerating innovation, building American AI infrastructure, and leading in international diplomacy and security—the Plan sets out wide-ranging policy actions across areas such as permitting, export controls, procurement, and workforce development. It emphasizes a deregulatory, pro-innovation approach, directing agencies to streamline deployment, avoid “ideological” AI systems, and consider state AI regulation when awarding federal funds. The Plan reflects a broader effort to centralize AI governance at the federal level and limit perceived regulatory overreach by states.
For labor and employment practitioners, the most significant developments are the Plan’s measured approach to state AI regulation preemption and the directive to revise NIST’s AI Risk Management Framework to eliminate references to Diversity, Equity and Inclusion and other concepts—both of which we discuss in detail below.
It is also important to note what the AI Action Plan does not explicitly address: it does not discuss disparate impact, employment selection procedures, or other employment-specific uses of AI. While the Plan includes pointed language around the role of ideology in AI systems, the Plan’s directives are largely focused on how the federal government procures and uses AI technologies, federal research and funding priorities, and how to attain broader economic competitiveness goals.
Key Components of the AI Action Plan
Pillar I: Accelerating Innovation
The Plan’s first pillar focuses on removing regulatory barriers and promoting rapid AI adoption across federal agencies and the broader economy. It directs federal agencies to identify and eliminate rules that may slow the deployment of AI technologies. Worker-related provisions in this pillar focus primarily on workforce development and retraining, with an emphasis on ensuring that AI complements rather than replaces human labor.
The Plan establishes an “AI Workforce Research Hub” under the Department of Labor to lead a sustained federal effort to evaluate AI’s impact on the labor market, including recurring analysis, scenario planning, and actionable insights for workforce and education policy. It also directs the Department of Labor to prioritize AI skills development as a core objective of education and workforce funding streams, including career and technical education, apprenticeships, and other federally supported skills initiatives, and to fund rapid retraining for individuals impacted by AI-related job displacement.
As Deputy Secretary of Labor (and former EEOC Commissioner known for his prior work in AI regulation) Keith Sonderling noted in his July 23 LinkedIn post applauding the Action Plan, “The U.S. Department of Labor believes AI represents a new frontier of opportunity for workers, but to realize its full promise, we must equip Americans with AI skills, build talent pipelines for AI infrastructure, and develop the agility in our workforce system to evolve alongside advances in AI.”
Pillar II: Building AI Infrastructure
The second pillar of the Plan addresses the physical and human infrastructure needed to support AI development and deployment. For instance, the infrastructure pillar emphasizes workforce development for AI infrastructure roles, including training programs for electricians, advanced HVAC technicians, data center operators, and other high-paying occupations essential to the AI infrastructure buildout.
Specifically, the plan directs the Department of Labor to create a national initiative identifying high-priority occupations critical to AI infrastructure and to partner with state and local governments to support industry-driven training programs for these priority occupations. These efforts include expanding Registered Apprenticeships for occupations critical to AI infrastructure, updating career and technical education programs, and creating early career exposure programs and pre-apprenticeships for middle and high school students.
The goal is to ensure that American workers gain from the opportunities created by AI infrastructure development.
Pillar III: Leading in International AI Diplomacy and Security
The final pillar outlines a global strategy to promote American AI standards and reduce reliance on adversarial technologies. It includes a federal initiative to export full-stack AI solutions to allied nations, encompassing hardware, software, and cybersecurity components. The plan directs the Department of Commerce, through NIST, to evaluate Chinese frontier models for alignment with Chinese Communist Party narratives. This provision is part of a broader national security posture and does not directly affect domestic commercial or employment-related AI use.
The Door Remains Open to Federal Preemption of State AI Laws
One issue many employment practitioners have been closely tracking is the potential for federal action that would limit the growing patchwork of state and local AI laws. Earlier this year, the “One Big Beautiful Bill Act” (H.R. 1) originally included provisions for a 10-year moratorium on state and local AI regulation, but those provisions were ultimately not included in the version of the bill signed by President Trump. Still, the concept of federal preemption over AI remains a live issue.
The Trump Administration’s AI Action Plan and related Executive Orders signal a strong preference for centralized, uniform federal regulation, and an explicit effort to curtail what the Administration characterizes as overreach by states and municipalities. As a result, employers should expect continued federal efforts to block or override state-level AI requirements in favor of a national approach.
In his July 23, 2025 remarks announcing the AI Action Plan and signing the accompanying Executive Orders, President Trump emphasized the need for a “common sense federal standard that supersedes all states,” and repeatedly called for uniform federal standards regarding AI.
The President’s remarks underscored the operational challenges posed by a patchwork of state AI regulations: “If you are operating under 50 different sets of state laws, the most restrictive state of all will be the one that rules,” President Trump explained, warning that “you can’t have one state holding you up; you can’t have three or four states holding you up.” To address this concern, he called for a federal standard “so you don’t end up in litigation with 43 states at one time.”
The AI Action Plan translates these views into specific policy directives, albeit in a more measured way. The Plan directs OMB to “work with Federal agencies that have AI-related discretionary funding programs to ensure, consistent with applicable law, that they consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.” Rather than attempting direct federal preemption, this approach leverages the federal government’s spending power to influence state action, leaving the door open for agencies to create financial incentives encouraging state alignment with federal AI priorities.
In a more assertive move, the Plan also directs the FCC to “evaluate whether state AI regulations interfere with the agency’s ability to carry out its obligations and authorities under the Communications Act of 1934.” This directive signals that the FCC may assert that certain state laws or regulations are preempted based on the FCC’s existing statutory authority over interstate communications. Importantly, the Plan also acknowledges that “the Federal government should not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation,” creating room for what the Administration might consider “good” versus “bad” state AI regulation.
The key takeaway is that while a national standard favoring “deregulation” appears to remain a priority for President Trump and his administration, the directives in the AI Action Plan do not specifically call for federal preemption of state AI laws and regulations. The practical impact of the directives in the AI Action Plan will depend heavily on how aggressively federal agencies, especially the FCC, implement the Plan’s directives, and whether the Administration pursues additional efforts to discourage or preempt state AI regulation.
The AI Action Plan: Removing DEI from the NIST Risk Management Framework
The AI Action Plan directs the National Institute of Standards and Technology (NIST) to “revise the NIST AI Risk Management Framework to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.” In the employment context, this directive warrants careful attention to its practical implications.
Version 1.0 of the NIST AI Risk Management Framework, released in January 2023, has become one of the leading voluntary guidance documents for managing AI-related risk. Importantly, the NIST AI RMF currently is, and is likely to remain, a voluntary framework. It does not impose binding legal requirements on private businesses, nor does the AI Action Plan seek to convert it into a mandatory compliance regime. Indeed, NIST is featured prominently throughout the AI Action Plan and will continue its well-established role as a critically important agency in shaping federal AI standards, evaluations, and guidance.
The Administration’s directive to remove references to DEI from the AI RMF is consistent with its broader philosophical perspective relating to DEI. However, it is important to keep in mind that removing these concepts from the NIST AI RMF does not change the underlying legal framework governing the use of AI in employment. The legal requirements under Title VII and other employment discrimination statutes remain unchanged, regardless of whether the NIST framework addresses bias considerations. As we noted in our April 24, 2025 Management Alert, disparate impact liability was codified into law by Congress in 1991, and private litigants retain the right to bring disparate impact claims under Title VII regardless of federal enforcement priorities or guidance frameworks.
Interestingly, despite the revisions to the NIST AI Risk Management Framework, the Action Plan supports the opportunity to “Build an AI Evaluation Ecosystem” in recognition that “rigorous evaluations” can be a “critical tool in defining and measuring AI reliability and performance in regulated industries.” Led by NIST and the Center for AI Standards and Innovation, this effort is intended to improve how federal agencies—and potentially the broader AI industry—assess AI reliability and performance. While it is not clear how broadly the AI Evaluation Ecosystem may apply, at the very least, it signals a recognition of the importance of developing an infrastructure for assessing AI reliability and performance. The practical impact of these changes will unfold over time, as NIST implements the changes directed by the Action Plan.
July 23rd AI Executive Orders
On July 23, President Trump also signed three executive orders regarding artificial intelligence.[1] The Order titled “Preventing Woke AI in the Federal Government” has generated significant media attention. Despite its framing, the Order’s immediate reach is relatively narrow: it focuses primarily on federal AI procurement practices and does not impose direct restrictions on private-sector AI use. While federal procurement standards have historically influenced broader industry norms, we predict the Order’s near-term practical impact in the employment context will be limited.
Practitioners should also understand that the Order’s binding requirements apply only to large language models (LLMs) being procured by federal agencies. However, the Order also directs that eventual implementing guidance must “specify factors for agency heads to consider in determining whether to apply the Unbiased AI Principles to LLMs developed by the agencies and to AI models other than LLMs.”
The Order directs federal agencies to procure LLMs developed in accordance with two “Unbiased AI Principles”: “truth-seeking,” meaning prioritizing historical accuracy, scientific inquiry, objectivity, and acknowledgment of uncertainty; and “ideological neutrality,” meaning freedom from partisan or ideological influence such as DEI concepts.
The Order also refers to DEI as “one of the most pervasive and destructive” ideologies, which in the AI context includes “the suppression or distortion of factual information about race or sex; manipulation of racial or sexual representation in model outputs; incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism; and discrimination on the basis of race or sex.”
As examples of problematic AI behavior, the Order cites instances where “one major AI model changed the race or sex of historical figures — including the Pope, the Founding Fathers, and Vikings — when prompted for images because it was trained to prioritize DEI requirements at the cost of accuracy” and where “an AI model asserted that a user should not ‘misgender’ another person even if necessary to stop a nuclear apocalypse.”
As noted, this Order applies exclusively to the federal government’s procurement of AI systems. While it sets forth sweeping and broad requirements about what kind of AI the federal government may purchase and deploy, it does not purport to restrict what AI companies can offer in the commercial marketplace, impose requirements on private employers’ AI use, or create new legal obligations for AI developers serving non-government customers.
Importantly, though, federal procurement standards have historically influenced broader market practices over time. American tech companies seeking to maintain their eligibility to sell their AI models to the government may adjust their development practices generally, to align with the Order’s directives and the Office of Management and Budget’s implementing guidance, potentially affecting the way that AI tools for the private sector are developed and sold.
For now, private employers using AI systems for decisions relating to labor and employment face no immediate regulatory changes from this Order. However, employers should monitor OMB’s implementing guidance, which the Order directs to be issued within 120 days, and its effect on the broader AI market and development practices, as federal procurement requirements may influence the features and capabilities available in commercial AI products over time.
Keeping Sight of the Big Picture
The AI Action Plan and Executive Orders signed on July 23 continue the Trump Administration’s broader philosophical shift towards a deregulatory approach focused on American competitiveness.
Employers should view these actions as an indication of longer-term market and regulatory trends. While these actions do not present immediate compliance changes for employment practitioners, the Administration’s approach seeks to influence the broader AI market and AI vendor practices. These long-term shifts will not eliminate the need for careful AI risk management in the employment context, and employment practitioners should continue to be mindful that underlying laws prohibiting employment discrimination remain unchanged.
[1] In addition to signing an Executive Order relating to “Woke AI,” President Trump also signed two additional Executive Orders that have less direct relevance for employment practitioners but underscore the Administration’s broader AI strategy. The “Promoting the Export of the American AI Technology Stack” Order establishes an American AI Exports Program to support deployment of U.S.-origin AI technologies globally. This initiative aims to strengthen America’s AI market position internationally but does not directly affect domestic employment AI applications. The “Accelerating Federal Permitting of Data Center Infrastructure” Order streamlines environmental review and permitting processes for AI data centers requiring more than 100 megawatts of power. While this Order facilitates the infrastructure needed for AI development, it primarily addresses environmental and permitting policy.