A former OpenAI researcher and AI safety advocate who helped popularise the dramatic scenario **"AI 2027"**, which warned that unchecked artificial intelligence could produce superintelligence by 2027 and potentially destroy humanity, has now revised his forecast after seeing the pace of progress unfold differently than he initially envisioned.
In the original AI 2027 document, published in April 2025, Daniel Kokotajlo and collaborators outlined a speculative timeline in which AI systems would become fully autonomous coders, capable of recursively improving themselves, by 2027, driving an intelligence explosion beyond human control. That scenario attracted widespread attention in tech and policy circles, was referenced by political figures, and sparked intense debate about the risks of rapid AI advancement.
However, Kokotajlo and some of the original contributors have since reassessed their predictions. Rather than holding to the 2027 date, Kokotajlo now projects that superintelligence, if it arrives at all, is more likely around 2034, acknowledging greater uncertainty about the capabilities and timelines of future AI systems. He has also become less certain that superintelligence would inevitably threaten humanity, and he now characterises the earlier, firmer timeline as aggressive and speculative.
The shift reflects broader debates among AI researchers about how quickly AI capabilities are advancing and what that pace means for safety and society. While some labs and industry leaders continue to discuss advanced capabilities arriving within a few years, expert surveys show a wide range of opinions on when, or even whether, human-level artificial general intelligence might emerge, with many respondents expecting it to take longer or remaining uncertain.