DOJ to Evaluate AI Compliance Programs

Paul Hastings LLP

The Department of Justice (DOJ) recently raised the stakes for businesses under investigation that use artificial intelligence (AI). The Evaluation of Corporate Compliance Programs (ECCP) outlines the criteria federal prosecutors consider when assessing how effective an organization’s compliance program is and deciding whether to pursue legal action against it. With the updated ECCP, investigators will now assess how businesses manage artificial intelligence risks when evaluating corporate compliance programs.

Deputy Attorney General Lisa Monaco explained the basis for inclusion, noting that “Where AI is deliberately misused to make a white-collar crime significantly more serious, our prosecutors will be seeking stiffer sentences—for individual and corporate defendants alike.” She further explained that the Criminal Division will incorporate “assessment of disruptive technology risks—including risks associated with AI—into its guidance on Evaluation of Corporate Compliance Programs.”

What artificial intelligence is in scope?

The artificial intelligence subject to evaluation is broad in scope and includes machine learning, reinforcement learning, transfer learning, and generative AI: “no system should be considered too simple to qualify as a covered AI system due to a lack of technical complexity.”

What should businesses evaluate?

The risk management areas to be evaluated include:

  • How the assessment of AI risk is conducted in conjunction with the enterprise risk management program
  • Whether policies and procedures give both content and effect to ethical norms and mitigate the risks the company has identified as part of its risk assessment process
  • How organizations conduct training on AI usage for all directors, officers, relevant employees, and, where appropriate, agents and business partners

These criteria come at a time of increased activity in AI regulation. The EU AI Act was approved in March 2024 and will take effect in phases over the next two years, requiring “deployers” of AI systems to provide notice of AI use and to regularly assess their activities depending on the level of risk. Several states, most notably California and Colorado, have passed AI bills into law that require disclosure and assessment of AI activities. President Biden signed an Executive Order last year directing a number of federal agencies to take action on AI for the industries in their purview.

As companies continue to navigate the growing use of AI and its regulation, they should understand:

  • How they are using AI
  • The types of data they are collecting, processing, and generating as a result of AI, particularly with respect to sensitive data
  • The controls they have in place to monitor this activity, including human autonomy and bias prevention
  • The risks of such technology to the business

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Paul Hastings LLP
