What Businesses Need to Know: Colorado’s First-in-the-Nation AI Act Diverges From Federal Approach to Disparate Impact

Troutman Pepper Locke

[co-author: Stephanie Kozol]*

What Happened

Last week, Colorado lawmakers held a special session that culminated in a decision to delay implementation of the Colorado Artificial Intelligence Act (CAIA) until June 30, 2026, pushing the timeline beyond its original February 2026 start date. The delay gives businesses a brief window to prepare, but the law itself remains intact: once it takes effect, companies will still need to build governance programs and perform regular impact assessments of high-risk AI systems.

With CAIA still slated to take effect next year, two very different approaches to AI liability are emerging. At the federal level, Executive Order 14281 directs agencies to abandon disparate impact analysis and limit liability to intentional discrimination, as detailed below. As the first comprehensive AI regulation at the state level, CAIA represents an entirely different approach, making companies responsible for disparate impacts even when there is no evidence of intent to discriminate.

For businesses, this creates a divided regulatory landscape. One path expands liability and imposes proactive compliance obligations, while the other narrows liability and reduces oversight requirements. The challenge is to design and deploy an AI governance approach that complies with both of these starkly contrasting standards.

I. CAIA: Liability for Unintentional Discrimination

CAIA establishes liability for both developers and deployers of AI if their systems produce discriminatory outcomes, even without intent. The act defines algorithmic discrimination to include disparate impacts caused by AI in consequential decisions, and it classifies as “high-risk” those systems that influence decisions in areas such as employment, housing, credit, education, health care, insurance, legal services, and essential government services.

The Role of Impact Assessments Under CAIA

“Impact assessments” are a central component of CAIA’s new regulatory requirements for AI in the state. Deployers of high-risk AI systems must complete an impact assessment before the system is first used, repeat the assessment at least annually, and conduct a new one within 90 days of any substantial modification, such as retraining the model or making a major change to its inputs or outputs.

Each impact assessment must include:

  • A description of the system’s purpose, intended use, and context of deployment;
  • The categories of input data and the nature of the system’s outputs;
  • An overview of the categories of data that the deployer used to customize or retrain the system;
  • The performance metrics used to evaluate accuracy and fairness, along with known limitations;
  • An analysis of potential risks of algorithmic discrimination;
  • The steps taken to mitigate those risks;
  • The transparency and oversight measures in place, including whether consumers are informed of AI use; and
  • Post-deployment monitoring procedures to catch issues as the system operates.

Impact assessments must be documented and retained for at least three years. As noted above, these assessments create an ongoing obligation for companies to continually test and validate the fairness of their systems in order to prevent disparate impacts from occurring.
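
For teams that track these obligations operationally, the sketch below illustrates, in Python, one way the assessment elements and cadence described above might be recorded internally. The field names, scheduling logic, and date arithmetic are our own illustrative assumptions based on this summary; they are not drawn from the statute's text and are not a compliance tool or legal advice.

```python
# Minimal, illustrative sketch (assumptions only, not statutory text or legal
# advice): one way a deployer's compliance team might record the required
# elements of a CAIA impact assessment and track the assessment cadence.

from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class ImpactAssessment:
    """Hypothetical record mirroring the assessment elements listed above."""
    system_name: str
    completed_on: date
    purpose_and_context: str                  # purpose, intended use, deployment context
    input_data_categories: list[str]          # categories of input data
    output_description: str                   # nature of the system's outputs
    customization_data_categories: list[str]  # data used to customize or retrain
    performance_metrics: dict[str, float]     # accuracy/fairness metrics, known limitations
    discrimination_risk_analysis: str         # potential algorithmic-discrimination risks
    mitigation_steps: list[str]               # steps taken to mitigate those risks
    transparency_measures: list[str]          # consumer notice and oversight measures
    monitoring_plan: str                      # post-deployment monitoring procedures


def next_assessment_due(last_assessment: date,
                        substantial_modification: date | None = None) -> date:
    """Earlier of: one year after the last assessment, or 90 days after a
    substantial modification (e.g., retraining or a major change to inputs/outputs)."""
    annual_deadline = last_assessment + timedelta(days=365)
    if substantial_modification is None:
        return annual_deadline
    return min(annual_deadline, substantial_modification + timedelta(days=90))


def retention_deadline(completed_on: date) -> date:
    """Assessments must be documented and retained for at least three years."""
    return completed_on + timedelta(days=3 * 365)


if __name__ == "__main__":
    last = date(2026, 7, 1)
    retrained = date(2026, 10, 15)
    print("Next assessment due by:", next_assessment_due(last, retrained))
    print("Retain documentation until at least:", retention_deadline(last))
```

Running the example simply reports the earlier of the annual deadline and the 90-day post-modification deadline, reflecting the cadence summarized above, along with the three-year retention date for the documented assessment.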

CAIA’s Safe Harbors and Enforcement

CAIA provides a form of safe harbor for companies that meet its requirements. Businesses that maintain a risk management program and complete the required impact assessments receive a rebuttable presumption of compliance. In addition, an affirmative defense is available if a violation is discovered and cured while the company is following a recognized risk framework.

Enforcement authority rests with the Colorado Attorney General (AG), and deployers must notify the AG within 90 days of discovering algorithmic discrimination. The effect is that while liability under CAIA extends to unintentional discrimination, companies that perform and document robust assessments and governance programs will have significant legal protections if problems arise.

II. Federal Approach: Liability for Intentional Discrimination Only

In April 2025, President Trump issued Executive Order 14281, titled “Restoring Equality of Opportunity and Meritocracy.” The order directs federal agencies to abandon disparate impact analysis in rulemaking and enforcement. Under this approach, liability is limited to intentional discrimination. Disparate outcomes without intent are not actionable, and agencies such as the EEOC, HUD, and CFPB will no longer require or expect disparate impact testing or documentation.

For businesses, this shift reduces the compliance burdens associated with federal oversight. Agencies are less likely to investigate AI systems based solely on the outcomes that they produce. While this should limit the risk of liability when deploying AI, the federal rollback does not eliminate risk altogether. Private plaintiffs may still pursue disparate impact claims under federal statutes such as Title VII of the Civil Rights Act or the Fair Housing Act. State-level enforcement under CAIA and similar laws will also continue regardless of federal policy.

III. Practical Implications of State and Federal Divergence

Colorado has tied liability to disparate impact and made recurring impact assessments the centerpiece of compliance. The federal government has gone the other way, abandoning disparate impact analysis and narrowing liability to intentional discrimination. The result is a divided regulatory landscape. Companies that operate nationally will need to reconcile these two systems: expansive state-level liability built around impact assessments and reduced federal oversight.

For businesses, the divergence creates a two-track compliance environment. Federal regulators are signaling that companies need not test or document disparate impacts, while Colorado requires ongoing assessments designed to uncover and mitigate them. Companies that focus exclusively on federal standards risk liability in Colorado and in other states that may follow its lead. By contrast, businesses that structure governance programs around CAIA’s higher standard (including annual assessments, consumer disclosures, and reporting protocols) will be positioned to satisfy both regimes.

It is important to recognize that CAIA does not just impose obligations; it also builds in safe harbor protections for businesses that comply. Companies that perform impact assessments and maintain a risk management program receive a rebuttable presumption of compliance. Those that discover and cure problems while following a recognized risk framework have an affirmative defense. These protections mean that compliance is not simply a matter of avoiding penalties, but a way to secure a measure of legal insulation if issues arise.

Conclusion

Businesses now face two starkly different approaches to AI regulation. At the state level, companies may be held responsible for unintentional discrimination unless they proactively comply with CAIA’s requirements, most notably by conducting detailed, recurring impact assessments. At the federal level, liability is limited to intentional discrimination, with disparate impact analysis abandoned as a regulatory standard under many circumstances.

CAIA also makes clear that companies have a path to protect themselves. By maintaining risk management programs, performing regular impact assessments, and documenting mitigation steps, businesses gain access to safe harbors in the form of a rebuttable presumption of compliance and an affirmative defense if they discover and cure problems.

As more states consider regulatory frameworks similar to CAIA, the patchwork is likely to expand. Companies that design robust risk management programs, perform thorough impact assessments, and calibrate their disclosures will be best positioned not only to navigate these diverging systems, but also to make full use of the safe harbors that CAIA provides.

*Senior Government Relations Manager

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Troutman Pepper Locke
