Social media and AI: Legal risks and South Africa’s response

Artificial intelligence (AI) is reshaping social media at a pace that South African law was never designed to match. Deepfake videos, voice cloning and AI-generated impersonations are no longer hypothetical threats – they are already causing real harm locally. This article outlines the risks, examines South Africa’s existing legal framework, draws on international legal developments to indicate where our law may be heading, and offers practical recommendations.

18 Mar 2026 | 4 min read

At a glance

  • South African law clearly prohibits the misuse of identity through deepfakes, but enforcement gaps leave victims exposed.
  • In line with international legislative developments, Parliament must update social media laws so that platforms are directly accountable, mandate watermarking of AI-generated content and ensure takedown systems actually work.
  • Until dedicated legislation arrives, the combination of existing statutes, constitutional protections and practical vigilance remains the strongest safeguard available.

The risks

A deepfake is AI-generated or manipulated media (e.g. video, audio or images) designed to make it appear as though a person said or did something they never did. The dangers are threefold: deepfakes deceive audiences into believing false content, they enable cybercrimes and reputational harm, and they can be created and published by anyone, including anonymous social media users.

South Africa has already experienced this first-hand. In 2024, broadcast journalist Leanne Manas had her image used in fake endorsements for weight-loss products and online trading platforms on Facebook and TikTok. In 2025, Professor Salim Abdool Karim appeared in a deepfake video making anti-vaccination statements while endorsing counterfeit heart medication. Deepfake videos of Elon Musk induced South Africans to invest in fraudulent financial schemes.

Voice cloning poses an equally serious threat. With just a few seconds of audio, AI can now replicate a person’s voice with natural intonation, rhythm and emotion. The risk extends beyond public figures: anyone with an online presence, including a LinkedIn or WhatsApp profile, is a potential target.

The South African legal framework

South Africa has no single law dedicated to AI or deepfakes. Instead, individuals must rely on a combination of existing statutes and common law principles that were not drafted with this technology in mind. Despite this, several laws can offer meaningful protection.

  • The Cybercrimes Act 19 of 2020 (Cybercrimes Act) criminalises the electronic disclosure of intimate images without consent under section 16. The definition extends to simulated images, so a deepfake does not escape liability simply because it is artificially generated.
  • The Protection of Personal Information Act 4 of 2013 (POPIA) prohibits the processing of personal information without a lawful basis, and section 99 allows victims to claim damages – often the most straightforward civil route where the harm is reputational or emotional.
  • The Electoral Act 73 of 1998 prohibits publishing false information intended to influence elections.
  • The Films and Publications Act 65 of 1996 prohibits distributing private sexual photographs or films without consent with the intention to cause harm, though its focus on whether the original image was “private” can be limiting where deepfakes use publicly available photos.
  • The Protection from Harassment Act 17 of 2011 enables victims to obtain protection orders to stop ongoing online harassment.

South African common law also provides recourse through the actio iniuriarum. The Supreme Court of Appeal confirmed in Grütter v Lombard and Another (628/05) [2007] ZASCA 2; [2007] 3 All SA 311 (SCA); 2007 (4) SA 89 (SCA) (20 February 2007) that a person’s identity is protected from unauthorised exploitation, supported by the constitutional guarantee of human dignity. In Kumalo v Cycle Lab (Pty) Ltd (31871/2008) [2011] ZAGPJHC 56 (17 June 2011), the court held that using someone’s likeness for false endorsements infringes both identity and privacy rights. These precedents can apply to deepfake misuse.

International developments

Jurisdictions abroad are moving rapidly, signalling the direction South Africa’s own framework may take. In the US, the TAKE IT DOWN Act, 2025 became the first federal law to criminalise non-consensual intimate deepfakes, requiring platforms to remove such content within 48 hours. Tennessee’s ELVIS Act, 2024 was the first to extend publicity rights expressly to AI-generated voice clones. In the European Union, the AI Act requires mandatory disclosure when content is AI-generated and backs its obligations with substantial fines – up to 7% of a company’s global annual turnover for the most serious infringements. The UK’s Online Safety Act, 2023 and Data (Use and Access) Act, 2025 have criminalised both the sharing and the creation of non-consensual intimate deepfakes, and regulators have launched investigations into AI tools generating such content.

South Africa has yet to announce specific AI regulation proposals, but the Department of Communications and Digital Technologies has released a National AI Policy Framework, signalling a future risk-based regulatory model. This framework draws inspiration from global standards and may form the basis for a future AI Act.

Recommendations

The primary challenge in South Africa is enforcement rather than the absence of prohibitions: courts face capacity constraints, litigation remains time consuming and costly, perpetrators hide behind anonymous profiles, and global platforms respond slowly to local court orders.

Organisations should map all AI use across their operations, update policies to address synthetic media, implement deepfake-aware incident response plans, and ensure contracts explicitly address AI-generated content.

Individuals should preserve evidence immediately (screenshots, URLs and timestamps), report content through platform tools and seek urgent relief through takedown notices citing the Cybercrimes Act, POPIA or common law iniuria.

Conclusion

South African law clearly prohibits the misuse of identity through deepfakes, but enforcement gaps leave victims exposed. Parliament must update social media laws so that platforms are directly accountable, mandate watermarking of AI-generated content and ensure takedown systems actually work. Until dedicated legislation arrives, the combination of existing statutes, constitutional protections and practical vigilance remains the strongest safeguard available.


The information and material published on this website is provided for general purposes only and does not constitute legal advice. We make every effort to ensure that the content is updated regularly and to offer the most current and accurate information. Please consult one of our lawyers on any specific legal problem or matter. We accept no responsibility for any loss or damage, whether direct or consequential, which may arise from reliance on the information contained in these pages. Please refer to our full terms and conditions. Copyright © 2026 Cliffe Dekker Hofmeyr. All rights reserved. For permission to reproduce an article or publication, please contact us cliffedekkerhofmeyr@cdhlegal.com.