Commentary on South Africa’s National Draft Artificial Intelligence Policy
At a glance
- South Africa’s Draft National Artificial Intelligence Policy was gazetted on 10 April 2026, with public comments due by 10 June 2026.
- It adopts a risk-based approach modelled on international frameworks such as the EU AI Act, with stricter requirements for high-risk AI systems in sectors like healthcare, financial services, and law enforcement.
- Regulatory frameworks should balance oversight with flexibility to avoid stifling innovation, as fostering AI innovation is critical to South Africa’s economic growth and global competitiveness.
- Organisations should use the public consultation period to assess their AI use, identify potential regulatory exposure, and consider engaging in the policy development process.
Recent South African experience with deepfakes, voice cloning, and AI generated impersonation illustrates the real world risks associated with AI deployment. These risks are currently addressed through a patchwork of existing laws such as the Protection of Personal Information Act 4 of 2013, the Cybercrimes Act 19 of 2020, and other sector specific regulations. The Draft AI Policy signals a move toward a more coherent and AI specific regulatory response, with specific focus on transparency, accountability, and harm mitigation. As such, it should be treated as an early signal of future compliance expectations.
Regulatory framework
The Draft AI Policy proposes the establishment of a new AI governance ecosystem, including a National AI Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsperson, a National AI Safety Institute, and a proposed AI Insurance Superfund intended to compensate individuals harmed by AI systems where liability is uncertain.
The Draft AI Policy does not adequately explain how these structures will interact with existing regulators such as the Information Regulator, ICASA, or the financial sector regulators. The precise mandates, independence, funding and accountability mechanisms for each body are also unclear. This lack of clarity creates a risk of duplicated functions, as well as gaps and conflicts in oversight and jurisdiction.
The proposed AI Insurance Superfund is particularly underdeveloped. The Draft AI Policy does not address who funds the Superfund, what harm qualifies for compensation, how causation is to be assessed, or how claims would interact with overlapping legislation such as POPIA.
Risk classification
Central to the Draft AI Policy is a risk based regulatory framework modelled on international frameworks, including the EU AI Act. Higher risk AI systems, particularly those deployed in sensitive sectors such as healthcare, financial services, law enforcement, and critical infrastructure, are expected to be subject to stricter regulatory requirements.
However, the Draft AI Policy does not define what constitutes a high, medium, or low risk AI system. Until clear definitions are introduced, organisations face uncertainty when attempting to assess their regulatory exposure, particularly those that build or use generative AI, automated decision making tools, and large language models.
Data governance and accountability
The Draft AI Policy seeks to align AI governance with POPIA, with specific reference to automated decision making under section 71. It promotes data protection by design as a baseline requirement and reinforces transparency as a core principle.
Key proposals include mandatory watermarking of training data for large language models, the development of cross border data flow protocols to protect data sovereignty, and a requirement for “sufficient explainability” for high risk AI systems.
Whilst these principles will require sector specific guidance to ensure that they are pragmatic and can be applied in day-to-day business operations, they represent a step towards meaningful oversight and the formulation of sector specific regulation of AI.
Practical considerations
Regulatory frameworks for AI systems should strike a balance between oversight and flexibility to avoid stifling innovation. Fostering AI innovation in South Africa is critical to driving economic growth, creating jobs, and positioning the country as a competitive player in the global digital economy.
The public consultation process offers an opportunity for organisations to influence the shape of South Africa’s future AI regulatory framework. Organisations – particularly those in financial services, healthcare, technology, and digital media – should begin mapping their existing and planned use of AI and identifying where they may fall into higher risk categories. We encourage organisations to submit written comments, particularly where the AI use cases have already been implemented.
The information and material published on this website is provided for general purposes only and does not constitute legal advice. We make every effort to ensure that the content is updated regularly and to offer the most current and accurate information. Please consult one of our lawyers on any specific legal problem or matter. We accept no responsibility for any loss or damage, whether direct or consequential, which may arise from reliance on the information contained in these pages. Please refer to our full terms and conditions. Copyright © 2026 Cliffe Dekker Hofmeyr. All rights reserved. For permission to reproduce an article or publication, please contact us cliffedekkerhofmeyr@cdhlegal.com.