The use of artificial intelligence in employment decision-making continues to grow. Nowhere is this more evident than in hiring, where employers have used AI to help screen résumés, schedule candidate interviews, and answer applicants' questions. But using AI in employment decision-making carries real risks, as recent litigation trends continue to show.
Mobley v. Workday is the most recent example of the legal risks that can arise. Derek Mobley, the lead plaintiff, alleges that Workday's AI-based applicant screening tools discriminated against him on the basis of several protected characteristics, including his age. More specifically, Mobley alleged that the tools, which review and interpret an applicant's qualifications for a position and can automatically reject the application based on that review, caused him to be rejected from more than 100 jobs for which he applied.
In May 2025, Mobley won a significant victory when the California federal court overseeing his lawsuit ruled that his case could move forward as a collective action. Since that ruling, nearly 100 individuals have filed consents to opt in and become plaintiffs in the lawsuit. More are expected to join, and the Mobley court's decision to let the case proceed as a collective action is likely to spur similar litigation against other companies.
Mobley v. Workday is not the only lawsuit of its kind. In August 2023, for example, the EEOC settled for $365,000 a lawsuit against three integrated companies providing English-language tutoring services, in which the EEOC alleged that the companies' tutor application software automatically rejected female applicants over the age of 55 and male applicants over the age of 60. See EEOC v. iTutorGroup, Inc., et al., Civil Action No. 1:22-cv-02565.
Given Mobley and the EEOC's enforcement of anti-discrimination laws against employers that use AI in employment decisions, employers should understand how their use of such tools affects their legal risk. Employers that use AI tools in employment decision-making, or that contract with vendors who do, should audit those tools to ensure their use does not run afoul of the relevant anti-discrimination laws. Certain cities and states, including but not limited to Illinois, Maryland, and New York City, legally require such audits, underscoring their importance.
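Audit methodologies vary by jurisdiction and vendor, but many start from a disparate-impact metric such as the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: a selection rate for any group below 80% of the highest group's rate can be evidence of adverse impact. The sketch below is purely illustrative, not a legally sufficient audit, and the group names and counts are hypothetical.

```python
# Illustrative disparate-impact check of an AI screening tool's outcomes
# using the four-fifths rule. This is a simplified sketch, not a
# compliance-grade bias audit.

def selection_rates(outcomes):
    """outcomes maps group -> (candidates advanced, candidates screened)."""
    return {group: advanced / total
            for group, (advanced, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold`
    (80%) of the highest group's selection rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: {"rate": rate,
                    "impact_ratio": rate / best,
                    "flagged": rate / best < threshold}
            for group, rate in rates.items()}

# Hypothetical screening outcomes: (advanced, screened) per group
results = four_fifths_check({
    "group_a": (50, 100),   # 50% selection rate
    "group_b": (30, 100),   # 30% selection rate
})
for group, r in results.items():
    status = "FLAG" if r["flagged"] else "ok"
    print(group, f"impact_ratio={r['impact_ratio']:.2f}", status)
```

A real audit would go further, for example testing statistical significance, examining intersectional groups, and documenting the methodology as some local laws require, but even a simple impact-ratio report like this can surface the kind of automatic-rejection patterns alleged in Mobley and iTutorGroup.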
Employers should also consider adopting written policies on how they will (and will not) use AI tools in employment decision-making, and on how they will respond when those tools conflict with anti-discrimination laws or other organizational goals. These steps not only mitigate legal risk but also help ensure that employers are not passing over qualified candidates or otherwise making uninformed employment decisions because of shortfalls in the AI tools they use.