South Africa was forced to withdraw its draft national artificial intelligence policy after a serious credibility problem came to light: several of the document's academic references were fabricated. Investigations revealed that at least six of the 67 cited sources did not exist, even though they were attributed to real journals and scholars. The discovery raised immediate questions about how such a critical policy document passed multiple levels of review without proper verification.
The government had spent months developing the policy, which aimed to position South Africa as a leader in AI governance across Africa. It proposed ambitious measures such as creating a National AI Commission, an AI Ethics Board, and a regulatory authority, while promoting responsible and inclusive AI adoption. However, the entire effort was undermined when journalists cross-checked the bibliography and found that some references were likely generated by AI tools rather than sourced from real research.
Officials acknowledged that the most likely cause was the use of generative AI during the drafting process without adequate human oversight. The fabricated citations were described as an “unacceptable lapse” that compromised the integrity of the policy. As a result, the document was withdrawn, and the government promised accountability and a more rigorous review process before releasing a revised version.
Beyond the immediate embarrassment, the incident highlights a broader and growing problem with AI systems: hallucinated outputs that read as credible but are factually false. Studies suggest such fabrications are becoming more common in academic and professional writing, underscoring the need for strict human verification. In this case, the irony was striking: a policy designed to regulate AI failed because of the very risks it sought to address, a reminder of the importance of understanding AI's limitations before relying on it in high-stakes decision-making.
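Much of that verification can be mechanized as a first pass. The sketch below is a minimal illustration, not a description of how the journalists or reviewers actually worked: it assumes Python with the requests library and queries the public Crossref API for each citation string. An empty result list flags a reference for human follow-up rather than proving fabrication, since books, preprints, and grey literature are often absent from Crossref.

```python
import requests


def crossref_first_pass(citation_text: str, rows: int = 3) -> list[dict]:
    """Query Crossref's public API for works matching a free-text citation.

    Returns up to `rows` candidate matches. An empty list is a red flag
    worth manual checking, not proof the reference is fake.
    """
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation_text, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        {"title": (it.get("title") or ["<untitled>"])[0], "doi": it.get("DOI")}
        for it in items
    ]


if __name__ == "__main__":
    # Hypothetical citation string; in practice, loop over every
    # entry in the policy's bibliography.
    citation = "Smith, J. (2021). Governance of artificial intelligence in Africa."
    hits = crossref_first_pass(citation)
    if not hits:
        print("No Crossref match: route this reference to a human reviewer.")
    for hit in hits:
        print(f"{hit['doi']}: {hit['title']}")
```

A check like this catches only the crudest fabrications; a diligent reviewer would still confirm that a matched source actually says what the citing text claims.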