This update is part of our AI FAQ Series. Learn more at our AI Law Center.
1. What is AI governance?
AI governance refers to the policies, processes and practices an organization establishes to guide the development, deployment and use of AI technologies in its operations. An AI governance framework provides oversight and decision-making across various aspects of AI deployment, including data usage, model development, legal and ethical considerations, and operational impact.
Key aspects of AI governance include ethical standards, data management, risk assessment, accountability and oversight, transparency, compliance with applicable laws and regulations, continuous monitoring and improvement, and employee training and awareness.
The goal of AI governance is to help organizations use AI technologies ethically and responsibly while mitigating risks and maintaining compliance with applicable laws.
2. What are the benefits of AI governance?
AI governance helps businesses harness the power of AI responsibly, driving innovation and growth while safeguarding against potential risks and ethical concerns. By implementing a robust AI policy and AI governance framework, businesses can not only mitigate risks but also build trust with customers, enhance their reputation and drive responsible innovation.
3. Do I need AI governance?
An AI governance strategy – involving people, policies and processes – is critical for any organization engaging with AI technologies to ensure that AI systems are developed and used in a responsible, ethical and compliant manner.
4. Do I need an internal or external AI policy?
Businesses should consider instituting one or more AI policies for various audiences to establish clear guidelines and standards for the development, deployment and use of AI technologies. Clear AI policies help protect the organization from legal and reputational risks, while fostering trust with users, stakeholders and regulators. AI policies can also help make a company's service providers, including contractors and vendors, aware of its AI-related requirements.
5. Do I need to train employees on the use of AI, and what should that training include?
Yes. As a best practice, employees should be trained regularly on the use of AI to ensure competent and ethical application. Training should cover the capabilities and limitations of AI, data privacy and security practices, the scope and requirements of internal policies, and how to interpret AI-generated insights or results. Training should also include guidance on how to identify and avoid bias in AI systems. More generally, AI literacy is a sound risk mitigation strategy.
6. How should I revise my data retention policies to account for the use of AI?
To adapt your data retention policies for AI use, focus on data minimization and secure data management. Establish clear guidelines for how long data is stored, ensuring that sensitive information is protected and unnecessary data is promptly and securely deleted. If the data includes personal data, applicable legislation may impose specific rules. Consider leveraging technology solutions to manage retention periods, and ensure compliance with relevant privacy laws through regular audits and transparency in your AI operations. Also consider what records are being created or maintained in connection with the AI systems, tools or platforms being used, and ensure those records meet appropriate standards. Keep in mind that such records (including, for example, AI searches) may be recoverable or discoverable in the future. Finally, prioritize educating teams on the importance of these policies and how to implement them.
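For teams evaluating technology solutions to manage retention periods, the following is a minimal, hypothetical sketch of automated retention enforcement. The record structure, the 90-day window and the function name are illustrative assumptions, not a prescribed implementation; the appropriate retention period depends on applicable law and the organization's own policies, and a production system would also log each deletion for audit purposes.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention window; the correct value depends on
# applicable law and the organization's retention policy.
RETENTION_DAYS = 90

def records_to_purge(records, now=None, retention_days=RETENTION_DAYS):
    """Return the subset of records older than the retention window.

    Each record is assumed to be a dict with a 'created_at' datetime.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [r for r in records if r["created_at"] < cutoff]

# Example: one record past the window and one within it.
now = datetime(2025, 1, 1, tzinfo=timezone.utc)
records = [
    {"id": 1, "created_at": now - timedelta(days=200)},  # past retention
    {"id": 2, "created_at": now - timedelta(days=10)},   # within retention
]
stale = records_to_purge(records, now=now)
```

A scheduled job of this kind can flag or securely delete stale AI-related records on a fixed cadence, supporting the data minimization goal described above.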