Virginia Passes Second-of-Its-Kind AI Anti-Discrimination Statute, but Law Vetoed by Governor

Saul Ewing LLP
The Virginia General Assembly recently passed a second-of-its-kind AI anti-discrimination bill, HB 2094, which would have taken effect on July 1, 2026. However, on March 24, 2025, Gov. Glenn Youngkin vetoed the proposed law.

Although HB 2094 will not take effect, businesses that use AI systems may still want to take note of the vetoed bill, as more AI regulation is likely to emerge across the country.

On the heels of Colorado’s AI discrimination law, HB 2094 attempted to place obligations on AI “developers” and “deployers” in Virginia to “aid in the prevention and mitigation of algorithmic discrimination caused by the use of a high-risk artificial intelligence system.”

The bill defined a “Developer” as any business that offers, sells, leases, gives, or otherwise provides a high-risk artificial intelligence system to consumers, while a “Deployer” was defined as an organization that “deploys or uses a high-risk artificial intelligence system to make a consequential decision.”

“Consequential Decision” included any decision that had a material legal, or similarly significant, effect on the provision or denial to any consumer of (i) parole, probation, a pardon, or any other release from incarceration or court supervision; (ii) education enrollment or an education opportunity; (iii) access to employment; (iv) a financial or lending service; (v) access to health care services; (vi) housing; (vii) insurance; (viii) marital status; or (ix) a legal service.

In other words, HB 2094 sought to establish a duty of reasonable care for businesses deploying high-risk AI systems in the employment, financial services, and healthcare sectors. Thus, businesses in Virginia that use AI systems to make employment decisions, such as hirings and terminations, would likely have been covered had the law taken effect. However, HB 2094 would have applied only to "high-risk" AI systems that operate without meaningful human oversight and serve as the principal basis for consequential decisions. AI tools that merely assist with business operations may not have qualified as "high-risk."

HB 2094 also would have required businesses in Virginia to conduct impact assessments before deploying high-risk AI systems. Each assessment would have had to include a statement disclosing the intended uses of the high-risk AI system, its intended purpose, the metrics used to evaluate its performance, and its known limitations. HB 2094 also would have required that deployers implement a risk management policy designed to "identify, mitigate, and document" algorithmic discrimination in the consequential decision-making of these high-risk systems. Furthermore, when a consequential decision was made, the deployer would have needed to disclose to the consumer the purpose and nature of the AI system and the consequential decision, the deployer's contact information, and a description of the AI system in plain language.

In vetoing HB 2094, Gov. Youngkin argued that the bill stifled progress and placed onerous burdens on Virginia’s business owners. Gov. Youngkin further argued that HB 2094 “would harm the creation of new jobs, the attraction of new business investment, and the availability of innovative technology” in Virginia.

Takeaways

Virginia’s bill – and the subsequent veto – are notable in light of the increased popularity of AI. It remains to be seen whether other states will follow Colorado and Virginia’s lead and put forth similar legislation to regulate the AI space. Given the current lack of comprehensive regulation at the state and federal levels, businesses should stay vigilant and monitor developments closely as AI regulation continues to evolve.

Moreover, even though HB 2094 was vetoed, employers that use high-risk AI systems may still want to consider seeking expert guidance to audit their AI usage and assess potential risk. It remains possible that Virginia’s legislature will propose a new, less stringent version of HB 2094. As a result, businesses that use high-risk AI systems may also want to consider developing impact assessments and analyzing whether there are any indicators of algorithmic discrimination.

Lastly, businesses may want to consider implementing the "Artificial Intelligence Risk Management Framework published by the National Institute of Standards and Technology, Standard ISO/IEC 42001 of the International Organization for Standardization, or another nationally or internationally recognized risk management framework for artificial intelligence systems," because businesses conforming to those standards would have been presumed to comply with HB 2094's requirements.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Saul Ewing LLP
