Google DeepMind CEO Demis Hassabis has issued a stark warning about a critical flaw in artificial intelligence: inconsistency. Even the most advanced AI systems, he said, can ace elite mathematical competitions yet fail at elementary school problems, a vulnerability that must be fixed before artificial general intelligence (AGI) can be reached. Hassabis described current AI as having "uneven" or "jagged" intelligence, excelling brilliantly in some dimensions while being easily exposed in others, echoing Google CEO Sundar Pichai's term "AJI" (artificial jagged intelligence). Google's Gemini models with Deep Think, for instance, can win gold medals at the International Mathematical Olympiad but "still make simple mistakes in high school maths".
Hassabis emphasized that solving this inconsistency requires more than simply scaling up data and computing power. Some "missing capabilities in reasoning and planning and memory" still need to be cracked, he explained, calling for better testing methodologies and "new, harder benchmarks" to map AI strengths and weaknesses precisely. The sentiment aligns with OpenAI CEO Sam Altman's assessment following GPT-5's launch, in which Altman admitted the model lacks continuous learning, something he considers essential for true AGI. Together, the warnings underscore a growing recognition among AI leaders that current systems' propensity for hallucinations, misinformation, and basic errors must be addressed before human-level reasoning can be achieved.
Hassabis has also warned against repeating social media's "move fast and break things" mentality, stressing responsibility over speed. He highlighted risks such as addiction, mental health harms, and the formation of echo chambers, citing studies showing that AI can replicate these patterns, and drew a parallel to social media platforms' early failure to anticipate consequences at scale. He urged responsible deployment, rigorous scientific testing, and international cooperation on regulatory standards, prioritizing people over profits.
To address these concerns, Hassabis stressed the importance of building more robust and reliable AI systems, calling for closer collaboration among researchers, policymakers, and industry leaders so that development is guided by transparency, accountability, and human well-being. His warning serves as a timely reminder that caution and cooperation, not speed alone, should drive the pursuit of AI advancement.