Starting October 1, 2025, California’s Civil Rights Department (CRD) will roll out new regulations on Automated Decision-Making Systems (ADMS). If your reaction is “What in the heck is that?”—congratulations, you're in the majority.
Let us translate: This is California’s polite (but firm) way of telling employers, “Just because a computer is doing it doesn’t mean you’re off the hook for discrimination.”
So, what’s an Automated Decision-Making System, anyway?
In the simplest terms, it’s any software or algorithm that makes, or helps a human make, employment decisions. Think résumé screeners, video interview analysis tools, productivity trackers, or anything with a whiff of AI that ranks, scores, or recommends people. Basically, if you’re outsourcing hiring decisions to something that ends in “.exe,” this applies to you.
And yes, even if that shiny AI tool came with a slick sales pitch about “bias-free decision-making,” the law knows better. Machines learn from people, and people are often biased. Just ask Grok.
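For the technically curious, here is a toy sketch (entirely hypothetical, not modeled on any real vendor’s product) of how a “bias-free” résumé scorer can still discriminate: nothing in it mentions a protected category, but employment gaps and graduation year are well-worn proxies for disability, caregiving, and age.

```python
# Hypothetical toy scorer (NOT any real vendor's model) showing how
# facially neutral features can proxy for protected characteristics.
from dataclasses import dataclass

@dataclass
class Resume:
    years_experience: float
    employment_gap_months: int  # gaps often track caregiving or disability
    graduation_year: int        # a near-perfect proxy for age

def score(resume: Resume) -> float:
    """Higher score = ranked higher. No protected category in sight."""
    s = 2.0 * resume.years_experience
    s -= 0.5 * resume.employment_gap_months                # penalizes career gaps
    s -= 0.3 * max(0, 2025 - resume.graduation_year - 10)  # favors recent grads
    return s

# Two equally experienced candidates; the one with a caregiving gap loses.
print(score(Resume(10, 0, 2015)))   # 20.0
print(score(Resume(10, 24, 2015)))  # 8.0
```

That is exactly the kind of pattern the new rules expect employers to be looking for.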
What the regulation actually does
The new rule says that if you, as an employer, are using ADMS, you now have affirmative obligations under California’s civil rights laws to make sure your systems aren’t perpetuating discrimination based on race, gender, age, disability, or any of the other categories protected under the Fair Employment and Housing Act (FEHA).
It doesn’t matter whether you built the tool in-house, licensed it from a “reputable” vendor, or borrowed it from your cousin who’s “really into machine learning.” If the tool is influencing who gets interviewed, hired, promoted, or even surveilled at work, you’re responsible.
Transparency is no longer optional
Applicants and employees have the right to know when they’re being assessed by something other than a human, so employers must now provide notice whenever an ADMS is in use. If someone asks how it works, the employer is expected to give a meaningful explanation, not hide behind the “it’s proprietary” wall. And if someone with a disability wants to opt out of being evaluated by the robot overlord, they’re entitled to that as a reasonable accommodation.
Discrimination is still discrimination—even when it’s digital
The CRD’s point is crystal clear: whether discrimination comes from a racist manager or a poorly trained AI model, it’s illegal all the same. So if you’re going to use AI or algorithms in employment decisions, you’d better understand them, disclose them, and make sure they don’t violate the law. No more “I didn’t know” or “The vendor said it was compliant.”
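What does “make sure they don’t violate the law” look like in practice? The regulations don’t prescribe a single test, but a familiar first-pass screen is the four-fifths (80%) rule from the federal Uniform Guidelines: compare each group’s selection rate against the highest group’s rate, and worry when the ratio drops below 0.8. A minimal sketch, with made-up numbers:

```python
# Minimal sketch of a first-pass bias screen: the four-fifths (80%) rule
# from the federal Uniform Guidelines. The CRD regulations don't mandate
# this exact test; it's just a common starting point for an ADMS audit.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group name -> (number selected, number of applicants)."""
    return {group: sel / total for group, (sel, total) in outcomes.items()}

def four_fifths_flags(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """True = group's selection rate is below 80% of the best group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: (rate / best) < 0.8 for group, rate in rates.items()}

# Hypothetical screening results for illustration only.
results = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_flags(results))  # {'group_a': False, 'group_b': True}
```

A real audit goes much further (statistical significance, intersectional groups, validation that the criteria are actually job-related), but even this crude check catches the obvious failures.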
Bottom line
- Decide whether you’ll be using ADMS at all.
- Select your ADMS vendor or platform based on evidence (not just the sales pitch) that it can comply with the new law.
- Train a human team to audit the system before first use and periodically thereafter, so it doesn’t slip back into discrimination with a mathematical veneer (the four-fifths check sketched above is one starting point).
- Prepare a summary of what information the ADMS collects and how that information will be used.
- Buckle up.
Because come October 1, ignorance won’t just be bad policy—it’ll be illegal.