Five Things to Consider When Designing an AI Governance Program

Stikeman Elliott LLP

Although artificial intelligence (“AI”) has been a matter of intense public interest for several years, few substantive laws regulate its use. Reasons for this include a lack of clarity about the scope of what needs to be regulated, as well as the fact that many of the harms that AI can potentially cause fall under existing law. The AI legislation, guidelines, and frameworks that do exist – such as the European Union’s AI Act or the National Institute of Standards and Technology’s (“NIST”) AI “Risk Management Framework” (“RMF”) – have adopted a common regulatory approach founded on governance, and specifically on transparency and risk mitigation. In response, many organizations are implementing AI governance programs. In this post we look at some of the key considerations that go into the design of such programs.

Five Steps to Good AI Governance

1. Establish a governance team and define its mandate

A successful AI governance program starts with an AI governance team. This team should be responsible for making recommendations to the organization’s governing body respecting:

  • the legitimate uses of AI within the organization;
  • the measures required to ensure these uses are respected by stakeholders within and outside the organization; and
  • the processes required to continuously monitor AI, applicable legislation, and organizational responses to both.

In addition to members of the IT department, the AI team should include representatives from departments – such as human resources, finance and marketing – that frequently use tools that include or are driven by AI. The team should also include members from procurement, risk management and the legal department as these employees are responsible for monitoring and enforcement. Although a broader team may at first seem daunting to manage, it will make for smoother adoption and implementation throughout the organization.

2. Define what AI means for your organization

Because AI can refer to different technologies and is frequently used as much to market a product as to describe it, it is important to define what is meant by AI and what role it should play within the organization. The NIST RMF defines AI as:

an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments.

This definition is broad and can include anything from a sophisticated grammar-check tool to a program that writes code to solve problems the system has trained itself to identify. While the NIST definition may not entirely capture the type of technology that an organization intends to regulate, it is a starting point.

Defining what AI means for your organization, however, is more than simply describing the technology. It also means identifying the AI that is appropriate for the organization’s culture and the uses for which the AI is intended. For example, certain generative AI tools should not be trained using data protected by strict confidentiality undertakings. It is therefore important to identify the tools that suit a particular organization’s culture and profile and develop a program around these.

3. Identify the laws that apply and know your “danger zones”

The greatest challenge in an AI governance program is to stay on top of the applicable legislation. As stated above, there are few comprehensive AI laws. Many jurisdictions have opted for guidelines or sector-specific regulations. Additionally, when identifying the applicable laws, it is important to understand not only the regulatory landscape but also the way the technology functions (or can malfunction), as this is where many of the risks lie. The most frequently cited danger zones include intellectual property violations, misuse of data and personal information, discrimination and human rights violations, civil liability and negligence, and criminal law violations. Organizations must also consider disclosure duties imposed by regulatory bodies such as securities regulators.

4. Build a program

A successful AI governance program should include:

  • Policies, such as an employee AI use policy, an AI governance policy describing how (and by whom) AI is vetted, and a supplier vetting policy;
  • Employee training, monitoring, and testing; and
  • Continuous improvement.

The program should include tools such as algorithmic impact assessments to measure the risk of various technologies and use cases. It should also include a register of the AI tools that have been deployed throughout the organization.

5. Review your MSAs

Regardless of whether your organization is purchasing AI from, or selling its own AI to, a third party, make sure the master services agreement (“MSA”) reflects both parties’ intentions. Some questions to consider are: (i) ownership of data inputs and outputs; (ii) ownership of intellectual property in the code and algorithms, the material generated by the AI, and the training data; (iii) service levels and metrics; (iv) allocation of liability; and (v) termination, including the ability to terminate if applicable legislation renders use of the AI illegal.

Conclusion

Like any governance program, implementing an AI governance program is as much about knowing the applicable law as it is about understanding the organization’s culture and technological needs. Given the relative novelty of AI and the rate at which the technology and the regulatory landscape are evolving, it is also a program that requires not only continuous monitoring and improvement, but engagement from every organizational unit.
