California’s New AI Employment Regulations Are Set To Go Into Effect On October 1, 2025

Proskauer - California Employment Law


The California Civil Rights Council, which promulgates regulations implementing California’s civil rights laws, has published a new set of regulations concerning artificial intelligence (“AI”) in the workplace. These new rules (available here) are set to go into effect on October 1, 2025, and amend the existing regulatory framework of the Fair Employment and Housing Act (“FEHA”). This latest round of regulations continues a trend of California policing AI in the workplace, as we have previously reported here and here.

According to the Civil Rights Department, these regulations are needed because “[a]utomated-decision systems — which may rely on algorithms or artificial intelligence — are increasingly used in employment settings to facilitate a wide range of decisions related to job applicants or employees, including with respect to recruitment, hiring, and promotion … [and] can also exacerbate existing biases and contribute to discriminatory outcomes.” Such “automated-decision systems” are defined as computational processes that make a decision or facilitate human decision-making regarding an employment benefit, which may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.

These regulations attempt to clarify the application of existing anti-discrimination employment laws (i.e., FEHA) in the context of AI. Among other changes, the regulations:

  • Broadly define an “agent” of an employer, such as companies hired to recruit and screen applicants, to be an “employer” under the FEHA.
  • Require employers to keep records of their automated decision systems data (such as data provided by or about individual applicants or employees, or data reflecting employment decisions or outcomes) for at least four years.
  • Affirm that automated-decision system assessments, including tests, questions, or puzzle games that elicit information about a disability, may constitute an unlawful medical inquiry.
  • Specify that it is unlawful for an employer or other covered entity (e.g., an agent) to use an automated-decision system or selection criteria that discriminate against an applicant or employee, or a class of applicants or employees, on a basis protected by the FEHA, such as gender, race, or disability.
  • Provide that an employer’s anti-bias testing (or lack thereof) and any response to the results of such testing, and other similar proactive efforts to avoid unlawful discrimination, are “relevant” to an employer’s defense against such claims.

Thus, with limited exceptions (such as the definition of “agent” and new recordkeeping requirements), the regulations are largely declarative of existing law, applied to new technologies. In other words, these regulations make clear that in cases of alleged disparate impacts against protected classes, it will be no defense to say, “the AI did it.”

We will continue to monitor how California applies anti-discrimination laws to the use of AI in employment decisions.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Proskauer - California Employment Law
