Daniel Kokotajlo, an AI researcher, former OpenAI employee and co-author of the widely discussed “AI 2027” scenario, now says progress toward artificial general intelligence (AGI) is “somewhat slower” than he and his collaborators originally predicted.
Kokotajlo helped lead the AI 2027 project, a detailed forecast released in April 2025 by the independent AI Futures Project. That scenario mapped out how AI might develop through the latter half of the decade, culminating in systems capable of fully autonomous coding and, ultimately, recursive self-improvement: an intelligence “explosion” that could yield superintelligence.
At the time, the report suggested that such milestones were plausibly within reach by 2027, making its predictions a focal point in discussions about existential risk and AI governance. The scenario was designed to be quantitative and concrete, recognising uncertainty but positioning 2027 as a central estimate for when key breakthroughs might occur.
But in recent public updates, Kokotajlo and collaborators have revised that outlook. According to reporting from multiple outlets, he now acknowledges that real-world AI development is running behind the pace laid out in the original AI 2027 timeline.
Autonomous coding, seen as a pivotal threshold on the path to AGI, is now pushed back toward the early 2030s, and his estimate for superintelligence now points to around 2034 rather than the late 2020s.
Why the slowdown?
The revision reflects a growing recognition within parts of the AI community that benchmarks, real-world integration and the complex dynamics of deployment are not progressing as smoothly as theoretical models suggested. Current AI systems, even the most advanced generative models, remain proficient at narrow tasks but lack the continuous autonomous reasoning and self-improving capabilities that would underpin an intelligence explosion.
This recalibration doesn’t mean the risk conversation has disappeared. Kokotajlo and others still emphasise that powerful AI systems will have profound societal, economic, and geopolitical effects, and that uncertainty should be a reason for investment in safety, oversight and international cooperation, not complacency.
Industry context
Other figures in AI research paint a picture of uneven, “jagged” progress rather than a smooth race to superintelligence. Some of OpenAI’s own leaders have publicly acknowledged limitations in current systems’ autonomy and continuous learning capabilities (hallmarks of true AGI), even as they pursue ever more sophisticated agents.
At the same time, debates continue over how to interpret timelines and what they should influence in policy. Some academics argue that discussions of existential risk have been amplified by speculative narratives and may detract from pressing issues like economic disruption, bias and the concentration of computational power.
What’s next?
For policymakers, investors and the broader public, the takeaway is nuanced. Yes, the dramatic end-of-decade deadlines that once circulated in tech circles are now being updated. But the fundamental uncertainties about when AGI might arrive, what form it will take and how society will adapt remain unresolved.