Laws/Regulations directly regulating AI (the “AI Regulations”)
There are no comprehensive, binding laws or regulations specifically governing AI across the entire African Union (AU). However, the AU has shown interest in AI governance, recognizing its potential to drive socio-economic transformation. The Continental Artificial Intelligence Strategy1 ("Continental AI Strategy") was endorsed by the AU Executive Council in July 2024 and reflects this vision. The Continental AI Strategy promotes ethical, responsible, and equitable AI practices across the continent, aligning with Africa’s Agenda 2063 goals to accelerate social and economic transformation.
Status of the AI Regulations
The Continental AI Strategy includes a phased implementation plan from 2025 to 2030, beginning with preparatory activities in 2024. Phase I (2025-2026) is focused on creating governance frameworks, national AI strategies, resource mobilization, and capacity building. Phase II, starting in 2028, aims to implement core projects informed by a review in 2027. The Continental AI Strategy focuses on five areas: harnessing AI’s benefits, building capabilities, minimizing risks, fostering cooperation, and stimulating investment.
Resource mobilization involves investing in and collaborating with partners to finance and support AI initiatives, with a focus on developing broadband connectivity, enhancing data infrastructure, and establishing high-performance computing systems, as outlined in the Continental AI Strategy.
Monitoring and evaluation (M&E) will be coordinated with Member States and will involve the development of an African AI readiness index and a dedicated M&E portal.
A midterm review in 2027 aims to refine indicators and improve implementation.
The AU aims to also establish a web platform to track progress in expanding AI benefits, mitigating risks, and building capacities in skills, research, and innovation.
Other laws affecting AI
As of now, several African countries have adopted AI strategies, each with unique focuses and stages of implementation:
- Algeria's AI Research and Innovation Strategy, adopted in 20212, is currently under review to consider recent advancements in AI technologies. This strategy emphasizes the establishment of a center of excellence in AI, addressing ethical and security issues, and promoting international collaboration.
- Benin's 2023 AI and Big Data Strategy3 aims to lay the foundations for a robust, sustainable digital ecosystem, focusing on building a national data infrastructure, promoting AI solutions, developing human capital, fostering research and innovation, and implementing an AI governance framework.
- Egypt's National AI Strategy4, also adopted in 2021, covers the adoption, implementation, and use of AI in government and national development, with a particular focus on human capacity building, startup ecosystem enhancement, and R&D in AI, including natural language processing.
- Meanwhile, Mauritius published its AI strategy in 20185, recognizing the potential of AI to address social and economic challenges across various sectors such as manufacturing, healthcare, fintech, and agriculture. The strategy is guided by principles of accountability, ethics, and inclusiveness.
- Rwanda's AI policy6, published in 2023, serves as a roadmap for harnessing the benefits of AI while mitigating its risks. The policy focuses on positioning Rwanda as Africa's AI lab and responsible AI champion, building skills, creating an open and secure data ecosystem, and driving public sector transformation.
- Similarly, Senegal's AI strategy7, also published in 2023, aims to contribute to the country’s national development plan by developing human capacity, supporting solutions that address development problems, fostering public-private partnerships, and creating an inclusive and trusted AI ecosystem.
- Morocco has taken significant steps in AI development, including the establishment of the International Centre on Artificial Intelligence affiliated with the Mohammed VI Polytechnic University, which was designated a UNESCO Category II center in AI and Data Sciences in November 2023.
- Other countries such as Ethiopia, Ghana, Kenya, Mauritania, Nigeria, South Africa, Tanzania, Tunisia, and Uganda are also making significant progress in defining AI policies and establishing institutions to drive AI development.
Definition of "AI"
As described in the Continental AI Strategy, AI refers to "computer systems that can simulate the processes of natural intelligence exhibited by humans where machines use technologies that enable them to learn and adapt, sense and interact, predict and recommend, reason and plan, optimize procedures and parameters, operate autonomously, be creative and extract knowledge from large amounts of data to make decisions and recommendations for the purpose of achieving a set of objectives identified by humans." (p.14)
Territorial scope
The AU is made up of 55 Member States, which represent all the countries on the African continent.
Member States are expected to domesticate the Continental AI Strategy by developing and implementing their own national AI strategies, tailored to their specific contexts and capabilities. The Strategy emphasizes the need for Member States to build capacity, establish governance frameworks, and align with the broader continental objectives set by the AU. Key enablers for this include increasing mobile penetration, improving digital infrastructure, and fostering public-private partnerships. However, inhibitors such as limited AI talent, inadequate data availability, and infrastructure gaps must be addressed to ensure successful implementation. Strengthening AI readiness and aligning national priorities with the broader Continental AI Strategy will be crucial for overcoming these challenges.
Entities engaged in partnerships, collaborations, or joint ventures with AU Member States or their institutions are also required to adhere to local AI regulations. Additionally, foreign entities providing AI-based products or services to AU entities must meet the legal, ethical, and safety standards set by national and continental guidelines. Those processing or managing data originating from AU Member States must comply with data protection laws and AI regulations to ensure data security and privacy. Entities involved in funding or providing technical assistance for AI projects within AU Member States need to align their operations with both continental and national AI regulations to support the AU's strategic goals, as outlined in the Continental AI Strategy.
Sectoral scope
According to the Continental AI Strategy, future AI regulations will aim to:
- Regulate entities based on the sector in which they operate, ensuring responsible AI development and use across diverse fields
- Emphasize a multi-tiered governance approach grounded in ethical principles, democratic values, and human rights to mitigate risks and promote transparency and accountability
Key areas of regulation will include intellectual property, electronic communications and transactions, whistleblowing and protected disclosure, access to information, personal data protection, cybersecurity, consumer protection, and antitrust and competition.
Additionally, the regulations will focus on inclusion and empowerment, particularly for underrepresented groups such as women, girls, people with disabilities, youth, children, and rural populations.
Specific sectoral regulations will also address labor protections for gig and platform workers, standards for public procurement of AI systems, regulatory approval for AI as medical devices in healthcare, and alignment with international standards for social media and content generation.
Overall, the AU's AI regulations will aim to ensure that AI's benefits are equitably distributed and its risks effectively mitigated across various sectors, fostering inclusive, fair, and sustainable AI ecosystems.
Compliance roles
The Continental AI Strategy delineates specific compliance roles for AI developers, deployers, and users to ensure responsible AI development, deployment, and use, and emphasizes a multi-tiered governance approach. This includes establishing regulatory sandboxes to promote innovation and independent oversight institutions to ensure transparency and accountability throughout the AI lifecycle.
AI developers are obligated to adhere to ethical standards, mitigate risks throughout the AI lifecycle, ensure transparency and accountability, and comply with data protection laws.
Deployers must comply with sector-specific regulations, conduct impact assessments, establish monitoring frameworks, and report any incidents or violations.
AI users are required to be informed about the AI systems they interact with, ensure data security, and adhere to usage policies.
Additionally, the Continental AI Strategy discusses independent oversight through institutions that enforce compliance and provide redress, stakeholder engagement in AI strategy design, and continuous research to assess new risks and develop policy innovations. These roles and obligations aim to foster a responsible, transparent, and accountable AI ecosystem across the continent.
Core issues that the AI Regulations seek to address
The Continental AI Strategy aims to address several core issues, including the risks to the rights and freedoms of individuals, the economy, and national security. The Strategy emphasizes the importance of ethical principles, democratic values, and human rights to safeguard against potential harms posed by AI technologies.
Additionally, the Continental AI Strategy seeks to mitigate economic risks by promoting fair competition, consumer protection, and responsible innovation.
Overall, the Continental AI Strategy strives to create a balanced and secure AI ecosystem that benefits society while minimizing risks.
Risk categorization
The Continental AI Strategy categorizes AI according to different levels of risk and outlines corresponding obligations for each level.
The Strategy identifies key risk dimensions, including environmental impact, social inequalities, biases, privacy concerns, gender disparities, job displacement, and threats to African values such as societal cohesion, democracy, and cultural heritage.
High-risk AI systems, particularly those with significant environmental footprints or those prone to perpetuating biases and discrimination, are subject to stringent oversight and impact assessments. Obligations for such systems include rigorous ethical reviews, transparency mandates, robust data protection measures, and mechanisms to mitigate biases.
Medium-risk AI applications, such as those impacting job markets or digital access, require continuous monitoring, public consultations, and adaptive policy frameworks to address emerging issues.
Lower-risk AI uses must still comply with general principles of transparency, accountability, and ethical standards, ensuring they do not inadvertently harm individuals or communities.
By categorizing AI systems based on risk levels, the AU’s regulations will aim to balance innovation with the protection of rights, economic stability, and cultural integrity.
Key compliance requirements
The Continental AI Strategy introduces key compliance requirements to ensure responsible AI development and deployment.
These include mandatory transparency in AI operations and public disclosure, robust cybersecurity and personal data protection measures, and ensuring AI explainability, where decisions can be understood and interpreted by humans.
Human control mechanisms must be in place to review and override AI decisions when necessary.
The regulations also require bias mitigation strategies, adherence to ethical standards, and accountability frameworks to align AI with human rights and social justice.
Conducting impact assessments and engaging a wide range of stakeholders, including public consultations, are essential to address potential risks and ensure inclusivity in AI strategies.
These requirements aim to foster a transparent, secure, fair, and accountable AI ecosystem across the continent.
Regulators
The enforcement of AI Regulations will likely be overseen by a combination of national and regional regulatory bodies, depending on the jurisdiction and sector, ensuring effective oversight and compliance. National AI Councils will play a critical role in formulating and implementing AI policies, while regional bodies will promote harmonization across Member States. Independent oversight institutions will ensure transparency and provide mechanisms for redress in case of violations. Additionally, the establishment of regulatory sandboxes will enable controlled testing of innovative AI solutions, fostering responsible innovation. Collaboration between sector-specific regulators, such as data protection authorities, cybersecurity agencies, and consumer protection bodies, will be essential to address the multi-faceted nature of AI risks and opportunities. These entities will work in coordination with the African Union’s overarching principles to align national efforts with the broader goals of the Continental AI Strategy.
Key regulators include National Data Protection Authorities for data protection and privacy compliance, sector-specific regulatory agencies for fields such as healthcare, finance, and telecommunications, and national cybersecurity authorities to protect AI systems from cyber threats.
Consumer protection agencies ensure AI technologies in consumer products comply with relevant laws, while National AI Councils or Commissions oversee AI development and regulation. Independent oversight institutions may also be established for independent review and enforcement.
Additionally, the Continental AI Strategy notes that governments should encourage access to diverse data sets through regulatory sandboxes that promote responsible AI innovation. Such sandboxes typically rely on legislative amendments that allow trials within limited geographical areas or time periods and include measures for close monitoring where supervision is required.
At the regional level, the African Union itself plays a coordinating role to ensure that Member States adhere to the overarching principles and standards set out in the AU's AI regulations.
Enforcement powers and penalties
The Continental AI Strategy does not specify detailed enforcement powers and penalties.
The AU aims to support Member States in establishing independent institutions to oversee AI use, enforce compliance with emerging standards, and provide access to redress and remedy where violations occur.
1 African Union Continental Artificial Intelligence Strategy available here.
2 Algeria's AI Research and Innovation Strategy available here.
3 Benin's 2023 AI and Big Data Strategy available here.
4 Egypt's National AI Strategy available here.
5 Mauritius AI strategy available here.
6 Rwanda's AI policy available here.
7 Senegal’s AI strategy available here.
Sulaiman Iqbal (Trainee Solicitor, White & Case, London) contributed to this publication.