Every major commercial AI platform operating in South Africa is in breach of POPIA. Every single one. Not allegedly. Not arguably. Structurally, by design, and without exception.
We know because we audited them.
South Africa's Government Gazette No. 54477 landed on 10 April 2026, carrying the country's first draft national AI policy. Cabinet approved it on 25 March. The public comment window closes 10 June 2026. Most organisations have not read it. They are also, separately, not meeting the obligations of a law that has been in force since July 2021.
The two problems are connected, and the connection is costing South African organisations more than they realise.
afrAIca conducted a disclosure-level compliance audit of eight major commercial AI platforms against both instruments simultaneously: POPIA in full, and the Draft SA National AI Policy clause by clause. The platforms were Grok (xAI), ChatGPT (OpenAI), Gemini (Google), Copilot and M365 Copilot Chat (Microsoft), DeepSeek, Qwen (Alibaba Cloud), You.com, and Perplexity. Policies were reviewed as published between January and April 2026.
What follows is what we found.
Finding 1: Every Platform Fails POPIA Section 72. No Exceptions.
Section 72 of POPIA prohibits the transfer of personal information of South African data subjects to a foreign country unless that country has adequate data protection laws, the data subject has consented, the transfer is necessary for a contract, or another prescribed condition applies.
Not one of the eight platforms has obtained an adequacy determination, established binding corporate rules, or cited contractual clauses that satisfy section 72 specifically for South Africa. The EU-US Data Privacy Framework, which several platforms cite for European transfers, has no equivalent for South Africa. It provides no protection to South African data subjects.
Every time a South African employee or citizen uses any of these platforms on a consumer account and enters any personal information, whether explicitly or incidentally through a prompt, that data is transferred offshore in potential breach of section 72. This is not a theoretical risk. It is a standing, documented, structural compliance failure across the entire commercial AI market as it currently operates in South Africa.
The Draft AI Policy's language on this is pointed. It calls for cross-border data transfer controls and specifically invokes the prevention of "colonial-era data extraction practices" (§SBB SP3.2). The market's current state is precisely the condition the policy was written to correct.
For South African organisations, the obligation sits with you. Not the vendor. You are the responsible party under POPIA. You are accountable for the transfer.
Finding 2: One Platform Is in a Category of Its Own
All eight platforms present a section 72 exposure. DeepSeek presents an additional sovereign risk that separates it materially from the others.
All DeepSeek user data is stored in the People's Republic of China. China has no data protection adequacy determination vis-à-vis South Africa. More significantly, Chinese national security law, specifically the National Intelligence Law and the Data Security Law, imposes obligations on Chinese companies to provide state access to data they hold on request, without notification to the data subject and without a court order.
South African data subjects using DeepSeek have no POPIA protection in practice. Their data is not only transferred without a section 72 mechanism. It is held in a jurisdiction where state access is legally mandated and cannot be contractually prevented.
afrAIca's position is unambiguous: DeepSeek should not be deployed by any South African organisation for any use involving personal data.
Finding 3: Your Teenagers Are Legal Children Under POPIA. The Platforms Treat Them as Adults.
POPIA defines a child as a natural person under the age of 18 who is not legally competent to take decisions concerning themselves without the assistance of a competent person. Processing a child's personal information requires the prior consent of a competent person, typically a parent or guardian.
Every commercial AI platform assessed applies a 13-year minimum age threshold, aligned to the US Children's Online Privacy Protection Act (COPPA) standard. This creates a five-year definitional gap: South African users aged 13 to 17 are legal children under POPIA and entitled to full competent-person consent protections. The platforms treat them as adults.
South African schools are deploying AI tools in classrooms. Teenagers are using generative AI for research, homework, and creative work. The Draft AI Policy mandates a child-centric AI ethical framework and prohibits AI systems from exploiting children's behavioural patterns (§9.3.1). The platforms, as currently constituted, cannot satisfy this requirement.
Finding 4: Three Platforms Use Your Inputs to Train Their Models
POPIA section 15 prohibits further processing of personal information for purposes incompatible with the original collection purpose. Section 11(1)(a) establishes consent as the primary lawful basis for processing. Not opt-out. Opt-in.
xAI (Grok) and DeepSeek use user inputs to train their models without a documented opt-out mechanism. Google Gemini continues to process user data for safety and improvement even after a user disables activity tracking, and retains human-reviewed data for up to three years after account deletion. This directly conflicts with section 14(4), which requires destruction or deletion when the organisation is no longer authorised to retain the information.
Several other platforms offer opt-out mechanisms. Under POPIA, those are not sufficient. Consent requires an affirmative act. Opt-out does not satisfy the section 11(1)(a) standard.
Finding 5: Microsoft Is the Least Misaligned. That Is Not the Same as Compliant.
Microsoft M365 Copilot Chat is the only platform in this assessment that explicitly and unambiguously states that prompts and responses are not used to train foundation models. That is a material differentiator and the clearest alignment finding in the entire document set.
It is also not the full picture.
Microsoft's enterprise data protection model means that the organisation deploying M365 Copilot becomes the responsible party under POPIA for the processing that occurs, with Microsoft acting as the operator. An enterprise deploying M365 Copilot without a POPIA-aligned operator mandate in terms of section 21, and a written processing instruction, is not meeting its obligations as responsible party.
The Enterprise Data Protection agreement provides GDPR-oriented contractual protections. These are not equivalent to a POPIA section 21 operator mandate. There is also a residual section 72 exposure: LLM processing can route outside the South Africa region under high utilisation, even for organisations that have selected the SA region.
Choosing Microsoft is the most defensible position in the current market. It is not a clean one.
Finding 6: Shadow AI Is Where the Real Exposure Lives
Every finding above assumes deliberate organisational deployment. That is not how most section 72 exposure is actually created.
It is created by an employee using a personal ChatGPT account to draft a client proposal. By a finance team member pasting a spreadsheet into Gemini to clean data. By a recruiter using a free-tier tool to summarise CVs. Shadow AI, the use of commercial AI tools outside sanctioned, governed channels, is the highest-frequency enterprise risk we encounter across every sector and every size of organisation.
The Draft AI Policy requires a named accountable AI official with Shadow AI governance responsibility (§7.1, §9.3.3). It provides no implementation mechanism for how that official should detect, classify, or govern shadow use.
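What a first-pass detection mechanism could look like is straightforward to sketch. The snippet below is an illustrative assumption, not an afrAIca tool or a complete inventory: it scans an exported proxy or DNS log for traffic to well-known commercial AI platform domains. The domain list and the `timestamp,user,url` log schema are hypothetical; adapt both to your own environment.

```python
# Minimal Shadow AI detection sketch: flag outbound requests to known
# commercial AI platform domains in an exported proxy/DNS log.
# The domain list below is illustrative, not exhaustive.
from urllib.parse import urlparse

AI_PLATFORM_DOMAINS = {
    "chatgpt.com": "ChatGPT (OpenAI)",
    "chat.openai.com": "ChatGPT (OpenAI)",
    "gemini.google.com": "Gemini (Google)",
    "copilot.microsoft.com": "Copilot (Microsoft)",
    "chat.deepseek.com": "DeepSeek",
    "grok.com": "Grok (xAI)",
    "you.com": "You.com",
    "perplexity.ai": "Perplexity",
}

def flag_shadow_ai(log_lines):
    """Return (user, platform) pairs for log lines hitting an AI domain.

    Assumes each line is 'timestamp,user,url' -- adapt to your schema.
    """
    hits = []
    for line in log_lines:
        try:
            _, user, url = line.strip().split(",", 2)
        except ValueError:
            continue  # skip malformed lines rather than failing the scan
        host = urlparse(url).hostname or ""
        for domain, platform in AI_PLATFORM_DOMAINS.items():
            # match the domain itself and any subdomain of it
            if host == domain or host.endswith("." + domain):
                hits.append((user, platform))
    return hits
```

A scan like this answers only the first question, who is using what. Classifying which categories of data are being entered, and on which account types, still requires governance beyond log analysis.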
Of all the instruments assessed in this work, the afrAIca diagnostic is the only one with a named, scored Shadow AI governance dimension. Because this is where the compliance exposure lives in practice, it is also where the assessment conversation must start.
What Organisations Should Do Now
The Draft AI Policy's comment window closes 10 June 2026. Enforcement does not wait for finalisation. POPIA is already law.
Six steps every organisation deploying commercial AI should take immediately:
- Conduct a Shadow AI audit. Identify which platforms staff are using, on which account types, and with what categories of data. This is the starting point, not the end point.
- Designate an Information Officer with formal AI platform oversight responsibility. Not a committee. One named person, with a documented mandate.
- Establish a data classification policy. Personal information, special personal information, client data, and children's data should not enter any commercial AI interface without a documented, POPIA-compliant processing basis.
- If you are deploying Microsoft M365 Copilot, execute a POPIA-aligned operator mandate with Microsoft before any further personal data processing. The enterprise agreement alone is not sufficient.
- Document a section 72 transfer basis for every offshore AI platform in active use, or suspend use pending documentation.
- Remove DeepSeek from all organisational environments where personal data may be processed.
None of these steps requires waiting for the policy to be finalised. All of them reduce standing exposure under existing law.
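The data classification step, in particular, can be partly automated. As a hedged illustration, the sketch below gates text before it reaches a commercial AI interface by flagging anything that looks like a South African ID number: a 13-digit run whose final digit is a Luhn check digit. The function names are hypothetical, and the ID number in the test is synthetic; a real classification gate would cover far more categories of personal information than this one signal.

```python
# Illustrative pre-submission gate: flag probable South African ID
# numbers (13 digits with a Luhn check digit) before text is sent to
# a commercial AI platform. One signal among many a real gate needs.
import re

def luhn_valid(digits: str) -> bool:
    """Standard Luhn checksum over a digit string."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

SA_ID_PATTERN = re.compile(r"\b\d{13}\b")

def contains_probable_sa_id(text: str) -> bool:
    """True if the text contains a 13-digit run passing the Luhn check,
    a strong signal of a South African ID number, which is personal
    information under POPIA."""
    return any(luhn_valid(m.group()) for m in SA_ID_PATTERN.finditer(text))
```

A gate like this blocks the most obvious leakage path, pasting identifiers into a prompt, but it does not substitute for the documented, POPIA-compliant processing basis the classification policy itself must establish.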
The comment period is an opportunity as well as a deadline. If the organisations and practitioners who understand the implementation reality of these requirements do not submit comment by 10 June 2026, the finalised policy will reflect the views of those who do. afrAIca will be submitting.
Your narrative on AI governance needs to be more than a plan on a slide. It needs to survive a regulatory audit.
Start that conversation at www.afraica.co.za.
#AgnosticAI #YourNarrativeAI #POPIA #AIGovernance #AfricanAI #afrAIca #AfricaLeadWithDigital