In one of his first executive actions after retaking the White House, President Donald Trump repealed a 2023 Biden-era executive order that imposed requirements on the use of artificial intelligence (AI). That repeal marked the first volley in recent Republican-led efforts to curtail AI regulation, efforts that came to a head last week in negotiations over Trump’s One Big Beautiful Bill Act. Yet on July 1, 2025, the U.S. Senate voted 99-1 to strike from that bill a moratorium on state AI laws, which would have prevented state and municipal governments from enforcing a surge of new laws cracking down on uses of AI technology that may facilitate discrimination.
Now, with the moratorium removed from the final version of the One Big Beautiful Bill Act that Trump signed into law, employers must contend with AI-related legislation sprouting in jurisdictions across the country. California, for example, recently passed a series of laws regulating AI and, beginning in January 2026, will require vendors of generative AI tools to publicly post information about the data on which their systems were trained. Colorado’s new law, set to take effect a month after California’s, focuses on the use of AI in employment decisions such as hiring, firing and promotions. It requires employers to adopt a risk management policy that satisfies specific statutory criteria governing their implementation and monitoring of AI, and, in a stated effort to curb "algorithmic discrimination" (a term that covers both intentional discrimination and disparate impact), it imposes civil liability on employers who violate these requirements. Similarly, Illinois’ new law prohibits employers from deploying AI that discriminates based on protected classes, and a bill passed by the New York Senate and another proposed in Connecticut would require audits of AI systems used in employment decisions. This latest wave of impending and potential laws follows requirements that took effect in New York City in 2023 mandating audits of AI tools used in employment decisions.
The widespread state legislative action aims to combat two potential, interconnected pitfalls of AI. First, AI tools can suffer from a lack of transparency. Due to their complexity, AI systems may generate decisions whose reasoning even their programmers cannot explain; this is known as the "black box" problem. Second, AI-driven outputs can replicate biases in the data on which they are trained. Programmers typically "train" AI by feeding it large volumes of historical data from which the system learns patterns. That training data can reflect biased human decisions or mirror historical inequities, leading the program’s decisions to have a disparate impact on candidates or employees with protected characteristics. Even when programmers ensure that training data excludes protected characteristics, the data may contain proxies for them; a zip code, for example, can correlate strongly with race. Because employment decisions are subject to the existing framework of anti-discrimination laws (such as Title VII, the ADA and the ADEA, among others), a black box algorithm trained on biased data can lead an employer to unknowingly violate these federal laws and similar state laws.
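To make the proxy problem concrete, consider the minimal Python sketch below. The records, feature names and hiring rates are invented for illustration only; the point is simply that removing a protected attribute from training data does not remove its influence when a correlated feature remains.

```python
from collections import defaultdict

# A toy illustration of the proxy problem: the protected attribute (race)
# is stripped from the training data, but a correlated feature (zip code)
# carries the same information. All records are hypothetical.

applicants = [
    # (zip_code, race, hired) -- invented historical records that
    # reflect past biased decisions
    ("10001", "A", 1), ("10001", "A", 1), ("10001", "A", 1), ("10001", "A", 0),
    ("20002", "B", 0), ("20002", "B", 0), ("20002", "B", 0), ("20002", "B", 1),
]

# A well-meaning vendor might drop the protected attribute before training...
training_rows = [(zip_code, hired) for zip_code, _race, hired in applicants]

# ...but here zip code predicts race perfectly, so any model that fits
# hiring outcomes by zip code reproduces the racial disparity anyway.
hired_by_zip = defaultdict(list)
for zip_code, hired in training_rows:
    hired_by_zip[zip_code].append(hired)

for zip_code, outcomes in sorted(hired_by_zip.items()):
    print(zip_code, sum(outcomes) / len(outcomes))
# 10001 0.75   <- mirrors group A's historical hiring rate
# 20002 0.25   <- mirrors group B's historical hiring rate
```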
Two cases highlight the considerable risk that AI technologies can pose to employers. In Mobley v. Workday Inc., a graduate of a historically Black college alleged that Workday, a major vendor of AI applicant screening software, discriminated against him based on age, race and disability with respect to applications he submitted to employers who used the company’s software. 740 F. Supp. 3d 796, 801 (N.D. Cal. 2024). In particular, the plaintiff in Mobley alleged the vendor trained the AI using data from the employer’s existing workforce, which purportedly led to the AI replicating biases as to age, race and disability. Less than a year before the Mobley decision, a virtual tutoring company entered a $365,000 settlement with the Equal Employment Opportunity Commission following claims that the company’s AI recruitment tool screened out candidates based on age.
Given the recent developments in state legislatures and courtrooms, organizations that rely on AI tools in employment decisions (including organizations that do not know whether their vendors’ software or other programs incorporate AI) should take the following steps to help ensure that they comply with state and local AI requirements:
- Audit each tool used in hiring, in setting the terms and conditions of employment, and in making promotion and termination decisions.
- Update policies to ensure that, when applicable, the organization is properly disclosing its use of AI as well as the required information about the AI system.
- Update policies to audit the AI tools at the required intervals to confirm that they are not reflecting or amplifying unlawful discrimination (a simplified sketch of one common audit metric appears after this list).
- Train human resources teams, hiring managers and those who make promotion and termination decisions on the requirements of the new laws.
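For the audit steps above, one widely used screening metric is the "four-fifths" rule drawn from the EEOC’s Uniform Guidelines on Employee Selection Procedures: a group’s selection rate below 80% of the highest group’s rate is a conventional red flag for adverse impact. The Python sketch below shows that calculation on hypothetical data; a real bias audit, including those required under New York City’s law, involves counsel and far more rigorous statistical analysis.

```python
from collections import Counter

def impact_ratios(records):
    """records: iterable of (group_label, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        selected[group] += int(was_selected)
    # Selection rate per group, then each rate relative to the highest.
    rates = {g: selected[g] / totals[g] for g in totals}
    highest = max(rates.values())
    return {g: rate / highest for g, rate in rates.items()}

# Hypothetical screening outcomes from an AI resume-screening tool:
outcomes = ([("group_a", True)] * 40 + [("group_a", False)] * 60
            + [("group_b", True)] * 20 + [("group_b", False)] * 80)

for group, ratio in sorted(impact_ratios(outcomes).items()):
    flag = "flag for review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
# group_a: impact ratio 1.00 (ok)
# group_b: impact ratio 0.50 (flag for review)
```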