If You Use AI to Write Governance Policy, You Had Better Check Its Work

  • April 26, 2026

 

South Africa's Draft National AI Policy contained fictitious citations in its reference list. The tools are not the problem. The absence of human oversight is.

A Policy That Proved the Point It Was Trying to Make

South Africa's Draft National Artificial Intelligence Policy was gazetted for public comment on 10 April 2026. It is an ambitious document: 86 pages covering governance, infrastructure, ethics, skills development, and a proposed regulatory architecture that includes a National AI Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsperson, a National AI Safety Institute, and an AI Insurance Superfund.

Shortly after publication, researchers checking the document's reference list found that several of the cited academic sources did not exist. According to a report by News24, some citations pointed to papers that could not be verified, and in some cases authors were credited with research on topics they had not published on. The Department of Communications and Digital Technologies (DCDT) characterised the draft as a "work in progress" and a "point of departure," acknowledging that the final policy would require extensive external consultation.

The working assumption, widely shared in the technology and legal community, is that portions of the document were drafted using a generative AI tool, and that the output was not adequately verified before the document was gazetted. Whether that is precisely what happened is for the DCDT to clarify. What is not in dispute is that a policy document intended to govern AI in South Africa contained references that turned out to be unreliable.

That is worth sitting with. A document whose stated purpose is to ensure responsible, trustworthy, and accountable AI use was itself a demonstration of what happens when AI outputs are used without sufficient human oversight.

What a Hallucination Actually Is

The term gets used loosely, so it is worth being precise about what it means technically.

In the context of large language models, a hallucination is not a crash or an error message. The system does not malfunction. It does exactly what it is designed to do: generate text that is statistically consistent with its training data. When asked to produce a citation, it produces something formatted like a citation, with an author name, a plausible title, a publication year. When it does not have reliable information to draw on, it fills the gap with invented content that carries the same surface appearance as accurate content.

The model does not know it is doing this. It has no awareness of the distinction between what it knows and what it is constructing. It produces the output with the same confident presentation whether the underlying information is accurate or fabricated. There is no signal in the text that tells the reader which is which.

This is a known, documented characteristic of every large language model currently in production. It is not a bug being worked on. It is structural to how these systems generate text. Understanding that is the starting point for any serious conversation about AI governance.
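The mechanism can be made concrete with a toy model. The transition table below is invented for illustration, and real language models are vastly larger and probabilistic in more sophisticated ways, but the structural point is the same: every sampled sequence is well-formed by construction, and nothing in the generation step records whether the result corresponds to a real source.

```python
import random

# Toy bigram "language model": picks each next token from a table of
# observed continuations. The output always looks like a citation,
# but the sampling step has no concept of whether the resulting
# reference exists. All names and titles here are invented.
TRANSITIONS = {
    "<s>": ["Smith,", "Jones,"],
    "Smith,": ["J."],
    "Jones,": ["A."],
    "J.": ["(2021).", "(2023)."],
    "A.": ["(2022)."],
    "(2021).": ["AI"], "(2022).": ["AI"], "(2023).": ["AI"],
    "AI": ["governance", "policy"],
    "governance": ["review."], "policy": ["review."],
}

def generate(seed: int) -> str:
    """Sample tokens until we reach a token with no continuation."""
    rng = random.Random(seed)
    token, out = "<s>", []
    while token in TRANSITIONS:
        token = rng.choice(TRANSITIONS[token])
        out.append(token)
    return " ".join(out)

citation = generate(0)  # a fluent, confidently formatted, unverified "reference"
```

Every output of this sketch is grammatical and citation-shaped; none of them is true or false from the model's point of view. Verification has to happen outside the generator.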

How do you spot one? At minimum: verify every citation independently. Do not trust that a correctly formatted reference points to a real source. Cross-reference statistics against the original research. Question numbers that are suspiciously precise. If a claim cannot be traced to a verifiable source, it needs to be treated as unverified, regardless of how authoritatively it is presented. In a policy document that will inform national legislation, unverified is not acceptable.
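Parts of that checklist can be triaged mechanically before the manual work starts. The sketch below is an illustration, not a tool anyone involved used: it splits a reference list into entries that carry a DOI (which a reviewer can then try to resolve against doi.org or Crossref) and entries with no identifier at all, which must be verified entirely by hand. The reference strings and helper names are invented for the example.

```python
import re

# A DOI always begins with "10.", a registrant code, then a suffix.
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/[^\s,;]+")

def triage_references(references: list[str]) -> dict[str, list[str]]:
    """Split references into DOI-bearing entries and manual-review entries.

    A well-formed DOI does not prove the source exists; it only gives
    the reviewer something concrete to resolve. Entries without any
    identifier offer no shortcut and must be checked by hand.
    """
    has_doi, manual_only = [], []
    for ref in references:
        (has_doi if DOI_PATTERN.search(ref) else manual_only).append(ref)
    return {"check_doi": has_doi, "manual_review": manual_only}

refs = [
    "Smith, J. (2023). AI governance in practice. doi:10.1234/example.5678",
    "Jones, A. (2024). Trustworthy machine learning. Policy Review 12(3).",
]
result = triage_references(refs)
```

Note what the triage does not do: a fabricated citation can carry a perfectly well-formed DOI, so the "check_doi" bucket still needs resolving against the registry. The point is only to make the manual workload explicit rather than to replace it.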

South Africa Is Not the First

The DCDT situation did not happen in isolation. South African courts have dealt with this directly and repeatedly.

In the Mavundla v MEC case in 2025, Judge Elsje-Marie Bezuidenhout found that only two of nine citations in a set of heads of argument were legitimate. The remainder were fictitious. The court described the conduct as irresponsible and unprofessional and referred the matter to the Legal Practice Council.

In Northbound Processing v South African Diamond and Precious Metals Regulator, decided in June 2025, Acting Judge DJ Smit found that multiple authorities cited in the applicant's heads of argument did not exist. The citations had been generated by an AI tool called Legal Genius. Junior counsel accepted responsibility, citing time pressure and inadequate verification. The matter was again referred to the Legal Practice Council. Notably, even the corrected version of the heads that counsel submitted contained further incorrect citations.

These are not isolated incidents involving careless individuals. They are early indicators of a systemic pattern: AI tools being used to generate substantive professional work, and that work being submitted without the verification that professional standards require. The policy document adds a new and more concerning dimension to the same pattern, because the stakes of national legislation are higher than any single court matter.

The Skills Gap Is the Real Story

I have written previously about the gap between AI awareness and AI implementation across African organisations. Our market research, conducted across 212 organisations on the continent, found that 72% of enterprises recognise the importance of AI, while only 35% have moved into formal implementation. The reasons are well understood: cost perception, skills scarcity, compute access, and governance uncertainty.

The DCDT situation adds another dimension to that gap. It is not just about whether organisations have the technical capability to deploy AI. It is about whether they have the human capital to govern what they deploy.

Of the organisations we surveyed, 82% lack internal AI expertise. That number means, in practice, that the majority of organisations using AI tools have limited internal capacity to critically evaluate the outputs those tools produce. They can run the prompt. They may not have the domain knowledge or the process discipline to interrogate the result before it becomes a report, a submission, or a policy document.

This is not a criticism of those organisations. The skills gap is real and it predates the current AI adoption wave. But it is a clear argument for why governance infrastructure needs to be built deliberately, not assumed.

Governance Is Not the Opposite of Speed

When governance comes up in AI conversations, the instinctive reaction from many practitioners is that it slows things down. That it is compliance language dressed up as strategy. That it stands between an organisation and the productivity gains AI can deliver.

The DCDT episode reframes that argument. A policy document that needed to be revised because of unreliable citations is not a fast output. It is a liability. The time cost of rework, the reputational cost of public scrutiny, and the credibility cost of a regulatory document whose sources cannot be verified together exceed whatever time was notionally saved by skipping the verification step.

Governance, applied properly, is what makes AI outputs usable. It is the verification layer between generation and publication. It is the human judgment that the tool cannot supply for itself. It does not slow the process down. It makes the output worth using.

What the afrAIca Framework Is Built to Address

At afrAIca, the foundational principle is assessment before action. No AI deployment without a readiness baseline. Not because technology should be delayed, but because implementation without readiness assessment is how organisations end up with expensive problems that were entirely avoidable.

Our eight-domain assessment methodology is aligned to the South African National AI Maturity Framework developed by the CSIR, and draws on ISO 37004. One of those eight domains covers regulatory and governance awareness specifically: the degree to which an organisation understands its obligations and has built the internal structures to meet them. In most assessments we conduct, this domain scores lowest. That is consistent with the broader pattern.

The assessment is not a compliance exercise. It is a gap analysis. It tells an organisation precisely where its governance infrastructure is absent before AI deployment creates exposure. The governance domain sits at 8% of the total readiness score, which reflects its actual weight: governance alone does not determine AI performance, but its absence creates disproportionate risk. An organisation that cannot verify its AI outputs before they become consequential documents is not ready to deploy AI in high-stakes contexts. Full stop.
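The weighting logic can be illustrated with a toy calculation. Only the 8% governance weight comes from the framework as described here; the other domain names and weights below are placeholders, and the gating rule is an illustration of the "not ready in high-stakes contexts" principle rather than afrAIca's actual scoring method.

```python
# Hypothetical eight-domain readiness score. Only the 8% governance
# weight is taken from the text; every other domain name and weight
# is a placeholder for illustration.
DOMAIN_WEIGHTS = {
    "strategy": 0.16,
    "data": 0.16,
    "infrastructure": 0.14,
    "skills": 0.14,
    "process": 0.12,
    "ethics": 0.10,
    "vendor_management": 0.10,
    "governance": 0.08,  # the weight stated in the text
}

def readiness_score(domain_scores: dict[str, float]) -> float:
    """Weighted sum of per-domain scores, each in the range 0.0-1.0."""
    return sum(DOMAIN_WEIGHTS[d] * domain_scores[d] for d in DOMAIN_WEIGHTS)

def ready_for_high_stakes(domain_scores: dict[str, float],
                          overall_floor: float = 0.6,
                          governance_floor: float = 0.5) -> bool:
    """A weak governance score blocks high-stakes deployment even when
    the weighted total looks healthy: weight is not the same as risk."""
    return (readiness_score(domain_scores) >= overall_floor
            and domain_scores["governance"] >= governance_floor)

# Strong everywhere except governance: the total passes, the gate does not.
scores = {d: 0.8 for d in DOMAIN_WEIGHTS} | {"governance": 0.2}
```

The design point the sketch makes is the one in the text: a small weight in the aggregate score does not mean governance can be averaged away, because its failure mode is disproportionate.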

The afrAIca model runs through three pillars: Consult, Develop, Implement. The Consult phase is where governance gaps are identified and addressed. What happened with the draft AI policy is a textbook illustration of what the Consult phase is designed to prevent: AI tools used in consequential work, without the governance structures in place to ensure the outputs are reliable.

Human Capital Is Not Optional

Africa's AI sovereignty conversation has, rightly, expanded beyond data residency. The argument that sovereignty means keeping data within South African borders is necessary but not sufficient. Genuine sovereignty requires that African institutions have the human capital and the internal capability to make independent, informed decisions about how AI is used, and to verify what it produces.

An institution that relies on AI-generated outputs without the internal expertise to interrogate those outputs is not operating independently. The tool is driving. The human is approving. That is a governance failure regardless of where the data is stored.

Building that internal capability is not a short-term project. It requires investment in skills development, in process design, and in the kind of institutional culture that treats AI outputs as starting points for human judgment rather than endpoints. That investment needs to start now, before the legislative environment hardens around AI use cases, and before more consequential documents are published with citations that cannot be found.

The Comment Period Is Still Open

South Africa's Draft National AI Policy is open for public comment until 10 June 2026. Submissions go to aipolicy@dcdt.gov.za.

There is substantive work in that document worth engaging with seriously. The ambitions around inclusive economic participation, ethical AI deployment, and continental leadership are legitimate. The regulatory architecture it proposes is worth interrogating carefully. There are strong arguments that the sequencing needs revision: infrastructure and skills development before regulatory bodies, not after.

But the citations issue needs to be resolved with verifiable sources, reviewed by people with the domain knowledge to distinguish reliable research from AI-generated approximations. That is not an unreasonable standard. It is the minimum standard a policy document should meet, particularly one that intends to govern how AI is used across the country.

Governance is not what you build after you deploy AI. It is what you build so that what AI produces can be trusted.
