Deepfakes in South Africa: A practical legal guide

Tuesday, January 13, 2026, 8:32
Theshaya Naidoo
The issue with deepfakes is that, for the average user, it is difficult to know what is fake and what is real

Every few months, we see another warning about manipulated videos, fake audio clips, or scammers using AI-generated voices to trick victims.  For many legal practitioners, these incidents feel like something happening “out there,” mostly overseas, mostly hypothetical.  However, deepfakes have already reached South Africa, and clients are starting to ask what can be done when their image, voice, or reputation is exploited online.

In this blog, we’ll walk you through what deepfake technology is, how it is being used locally, and why it raises difficult questions for evidence, privacy, consent, and liability.  This will help you understand the emerging risks so that you can advise clients confidently, whether they are individuals, companies, or public institutions.

We begin with the basics, then move on to real-life South African examples that illustrate the scope of the problem.

 

What is deepfake technology?

Deepfakes are not ordinary edited videos.  They are AI-generated or manipulated images, audio, or video designed to appear authentic.  The technology uses machine-learning models that study a person’s face, movements, or voice patterns and then recreate them in new scenarios.  The result is a piece of content that appears authentic, even when the event never actually occurred.

From a legal perspective, there are two essential things to remember: (1) the barrier to entry is low, because anyone with a smartphone and freely available software can produce a convincing deepfake, and (2) victims often discover the content long after it has circulated, which complicates issues of consent, reputational harm, and urgent relief.  Legally, we are trained to treat photographs, CCTV footage, and voice recordings as reliable forms of evidence; however, deepfakes disrupt this assumption, forcing us to reassess how we understand authenticity, how we challenge manipulated material in court, and how we advise clients whose likeness has been misused.

 

Recent examples of deepfakes

To understand the risk, it is helpful to examine how deepfakes are already being used in South Africa, and the incidents are more widespread than many people realise. 

  • A well-known South African medical specialist recently discovered videos online where “he” endorsed products he had never used.  The clips were convincing enough that strangers contacted him for refunds and advice, believing the content was genuine.  This is a typical pattern, where the victim becomes aware of the damage only after it has been done.
  • Cybersecurity researchers warn that the threat could easily extend to political actors.  A fake video of a politician announcing a policy change or making an inflammatory remark could spread rapidly across social media and have real-world consequences before it is debunked.
  • Even outside major incidents, the everyday risk is on the rise.  With consumer apps capable of mapping a person’s face from a few photos, it is no longer only public figures who need to worry.  Any person with an online presence, including those with profiles on LinkedIn, Facebook, Instagram, or even WhatsApp, is now a potential target.

These examples demonstrate that deepfakes are already present in South Africa, affecting individuals and institutions, while the legal framework is still evolving to keep pace with this emerging technology.

 

What are the consequences for the perpetrators and for the people targeted?

When a deepfake circulates, two groups face consequences: the person who created or shared it and the person whose identity is misused.  The law does offer pathways for accountability, but in practice, the immediate impact is often felt most by the victim.  The legal consequences are outlined briefly here, with a detailed legislative discussion to follow.

 

Consequences for perpetrators

If someone creates or distributes a deepfake in South Africa, they may face:

  • Criminal consequences, for example, where the deepfake is used to deceive, impersonate, or obtain money.
  • Civil liability, including claims for defamation, invasion of privacy, or infringement of a person’s identity or likeness.
  • Financial claims, especially when the deepfake forms part of a scam or causes measurable loss.

 

Consequences for the person targeted

For the subject of a deepfake, the consequences tend to appear immediately, often before any legal process begins.

Reputational impact

When a deepfake circulates, people often react to it as if it were genuine.  Professionals in South Africa have had to explain to patients, clients, or colleagues that a video or recording was fabricated.  Even after clarification, the impression created by the deepfake may linger.

Personal impact

Many victims describe a sense of discomfort or loss of control when they discover their face or voice was used without permission.  Where the content is sexual, misleading, or damaging to professional standing, this impact can be severe.

Business and financial consequences

If the deepfake presents the person as endorsing a product, directing a payment, or giving an instruction, they may need to engage customers or stakeholders to correct the situation.  This can take time and can affect trust in professional relationships.

Legal remedies exist, but the immediate effects are often reputational and practical.  Understanding the broader impact of these technologies is essential, so you can guide clients through the first steps while a legal approach is being considered.

 

How does South African law currently address deepfakes?

South Africa does not yet have a single law dedicated to AI or deepfakes.  Instead, when someone brings you a matter involving manipulated audio or video, you rely on a combination of existing statutes and common-law principles.  There is currently no general duty to label generated or manipulated content; where labelling does occur, it stems from platform policies rather than South African law.

It is important to note that these laws were not drafted with deepfakes in mind, yet many of them can still provide victims with meaningful protection once a deepfake harms someone’s dignity, privacy, reputation, or safety.

Several Acts already prohibit conduct that deepfakes often involve, and while none of them use the word “deepfake,” the underlying behaviour fits squarely within their scope.

  • The Cybercrimes Act 19 of 2020 is often your starting point where a deepfake involves sexual or humiliating content, since Section 16 makes it an offence to disclose an intimate image electronically without consent.  The definition includes simulated images, which means a deepfake does not escape liability simply because it is artificially generated.  However, there is a practical difficulty because identifying the original uploader can be slow, and anonymity tools often complicate investigations.
  • The Films and Publications Act 65 of 1996 (as amended) prohibits distributing private sexual photographs or films without consent, where the intention is to cause harm.  Deepfake pornography clearly matches the harm this provision is meant to prevent.  However, the Act focuses on whether the original image was “private.”  Many deepfakes use publicly available photos and place them into explicit content.  Strictly applied, this wording can be limiting.
  • The Electoral Act 73 of 1998 prohibits the publication of false information intended to influence an election.  A deepfake of a political figure making a false statement, or a fabricated message about voting procedures, could fall within this restriction.  The challenge is real-time enforcement, especially online.
  • The Protection of Personal Information Act 4 of 2013 prohibits the processing of personal information without a lawful basis, and Section 99 allows victims to claim damages.  Deepfakes almost always involve personal information, such as a person’s face, voice, or image.  POPIA is often the straightforward civil route where the harm is reputational or emotional, because it focuses on the unlawful processing itself rather than on the content of the deepfake.
  • The Protection from Harassment Act 17 of 2011 allows victims to obtain a protection order to stop the conduct.  Since deepfakes are frequently part of ongoing online harassment, this Act offers a route to immediate relief while more formal processes unfold.
  • The Electronic Communications and Transactions Act 25 of 2002 (ECTA) and the associated ISPA takedown system are practical tools for removing harmful content.  While they do not provide compensation, they are often essential when deepfakes spread quickly and need to be contained.

 

Challenges with enforcing deepfake laws in South Africa

It becomes clear that South Africa has a workable but fragmented set of laws that can potentially be applied to deepfakes.  Still, the real operational difficulty lies not only in using the existing rules, but in recognising their limits.  If you regularly advise clients on online-harm matters, these are the practical challenges you may need to anticipate.

Access to justice remains uneven.

  • Deepfakes spread within hours, but court dates, opposing papers, and procedural steps unfold over days or weeks.  By the time a matter is heard, the content has typically been widely circulated.
  • Most victims cannot afford urgent applications, expert reports, or extended litigation.  Matters involving online abuse are commonplace, but specialist pro bono support is limited and often oversubscribed.
  • Where the issue is reputational damage or intimate manipulation, waiting weeks for a court date often leaves the victim with little meaningful relief.

Global platforms respond slowly to local orders.

  • Courts can issue directions to platforms such as Meta or TikTok, but the practical work of getting the order served and processed takes time and money.
  • Even when a platform complies, deepfakes tend to reappear through screenshots, reshares, cached versions, and private groups.  A single removal seldom resolves the problem.
  • Each platform has its own legal process and timelines.  Some respond promptly, while others do not acknowledge urgent safety risks until the content has already spread widely.

Perpetrators are challenging to identify.

  • Most deepfake creators use VPNs, temporary email addresses, overseas phone numbers, or accounts that have been deleted.  Once the profile disappears, investigators have little to trace.
  • Cybercrime units exist in South Africa, but they are unevenly resourced, and many stations are unfamiliar with complaints related to deepfakes, with the result that victims are often redirected multiple times before a docket is opened.
  • Identifying an account behind a deepfake may require forensic analysis, cooperation from foreign service providers, and technical tools that are not always available to investigators.

Obtaining identifying information from platforms is slow.

  • Even when a lawyer submits the correct legal documentation, platforms may take weeks to release basic information such as IP logs or account details.
  • When platforms refuse or delay, attorneys often need to approach the court for more specific disclosure orders, adding another layer of procedure and cost.
  • Civil claims and criminal prosecutions depend on knowing who created or distributed the content.  Without that information, matters remain at a standstill.

By now it is clear that while South Africa has a growing need for deepfake regulation, and while some existing laws can be interpreted to apply, enforcement often lags far behind the harm.  The following section examines what individuals can realistically do to protect their online image, taking these enforcement limitations into account.

 

How to protect your image online: Practical steps for the South African context

Because existing laws offer only limited means of addressing deepfakes, most victims need fast, practical steps to contain the harm, followed by a considered legal strategy once they understand what they are dealing with.  Both approaches are important and work together.

 

Non-legal approaches

  • Use platform tools and reporting systems:  Major platforms have built-in features for reporting impersonation, manipulated media, and harmful content.  In many cases, this leads to faster removal than waiting for formal legal steps.  Victims should use these tools as soon as the deepfake appears, ideally while also preserving evidence.
  • Use publicly available detection tools (with realistic expectations):  Tech companies and researchers continue to develop tools designed to spot manipulated media.  They are not perfect, but they can help victims understand what they’re dealing with.  Microsoft’s Video Authenticator is one such tool: it analyses photos and video frames and produces a score estimating the likelihood of manipulation, detecting subtle features, such as unnatural blending lines or unusual shading, that ordinary viewers might miss.

For many victims, the technological response is simply the first step.  Once the immediate shock is managed and the content is reported or contained, the next question becomes: What legal remedies are available, and how can they be activated?

 

Legal approaches

When a deepfake causes real harm, legal teams require a structured, time-sensitive plan that aims to contain the spread, secure evidence, and compel platforms or perpetrators to cooperate.  Once the immediate crisis is stabilised, practitioners can consider longer-term remedies such as interdicts, damages, or identity-based claims.

  • Secure the evidence immediately:  Save the video, URLs, timestamps, and screen recordings.  This forms the foundation for any civil claim, protection order, or criminal complaint.
  • Use urgent legal remedies to stop the spread:  Legal teams can send urgent takedown notices citing the Cybercrimes Act, POPIA, the Films and Publications Act, or common-law iniuria.  If harassment is ongoing, a protection order can force the perpetrator to stop contact and may require service providers to assist with identification.  In severe cases, opening a criminal matter allows prosecutors to seek identifying information through section 205 subpoenas.
  • Pursue civil relief for longer-term protection:  Depending on the harm, victims can seek interdicts, claims for defamation or impairment of dignity, false-endorsement remedies, or POPIA-based damages.  Courts may order removal of the content, preservation or disclosure of platform data, and compensation for reputational or privacy harm.
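To make the evidence-preservation step above concrete: the sketch below is a minimal, illustrative Python script (not legal advice, and not a substitute for a forensic expert) that records the SHA-256 hash, size, and capture time of each saved file in a simple manifest.  Hashing evidence at the time of capture makes it easier to demonstrate later that the files were not altered.  The file names and the `source_url` field are hypothetical placeholders for illustration only.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def hash_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(paths, source_url=""):
    """Record each saved file's name, size, and SHA-256 hash,
    together with the UTC time of capture and the source URL."""
    return {
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "source_url": source_url,  # hypothetical: where the content was found
        "files": [
            {"name": p.name, "bytes": p.stat().st_size, "sha256": hash_file(p)}
            for p in (Path(x) for x in paths)
        ],
    }


def save_manifest(manifest, out_path="evidence_manifest.json"):
    """Write the manifest to disk as human-readable JSON."""
    Path(out_path).write_text(json.dumps(manifest, indent=2))
```

In practice, a victim or their attorney would run this over the downloaded video and screenshots immediately after capture, keep the resulting JSON file alongside the originals, and hand both to any investigator or expert later in the process.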

 

Conclusion

Deepfake matters are becoming part of everyday legal work, and South Africa’s current framework can only stretch so far.  The tools we have are helpful, but they rely on people acting quickly, knowing what evidence to keep, and understanding which remedies make a difference.

Until we have more explicit rules for synthetic media, the best protection is a mix of practical vigilance and a focused legal response.  For practitioners, that means guiding clients through the first hours, choosing remedies that match the harm, and recognising where the system still falls short.  Awareness and preparation remain the strongest safeguards we have.

 

About the author

Theshaya Naidoo

Theshaya Naidoo is a PhD (Law) Candidate and Canon Collins Scholar.  Her research focuses on 'The Legal & Ethical Implications of Neurotechnology on the South African Criminal Justice System'.  She holds an LLM in Medical Law, where her thesis focused on 'The necessity of sui generis AI regulation in South Africa'.  Theshaya is a Gawie le Roux Institute of Law alumna and top achiever in the 400-hour GLR PVT School.

Last updated on 10 December 2025.
