In late 2025, some of the most influential figures in artificial intelligence, including Geoffrey Hinton, Ilya Sutskever, Alexandr Wang, and Yann LeCun, were publicly debating the future direction of AI development. At the centre of this discussion is whether the traditional “age of scaling” (the idea that simply increasing computing power and data leads to smarter AI) is still valid or has reached its limits. These arguments are shaping research priorities and investment strategies throughout the tech world.
Geoffrey Hinton, often called the “Godfather of AI,” has stopped short of declaring the end of scaling. He suggests that future models might generate their own training data through reasoning processes, much as early reinforcement learning systems did, pointing to the possibility of continued progress even as conventional data sources become constrained. Hinton’s perspective highlights that while scaling faces challenges, it may remain part of the path forward rather than becoming entirely obsolete.
Ilya Sutskever, a former OpenAI leader now focused on his own research venture, has argued that the field is shifting back toward fundamental research rather than blindly increasing scale. He questions whether multiplying computing resources by enormous factors would yield dramatic qualitative advances, and frames current progress as entering an “age of research,” where breakthroughs depend more on new ideas and architectures than on brute force. Yann LeCun, another long‑time AI pioneer, has echoed this skepticism about equating more data and compute with more intelligence, emphasizing that raw scaling doesn’t automatically make AI systems fundamentally smarter.
Not everyone agrees with this reassessment of scaling. Demis Hassabis of DeepMind has argued that pushing scaling laws remains essential, and potentially sufficient, for approaching artificial general intelligence (AGI), and that continued scaling could still unlock the most elusive advances. Meanwhile, figures like Alexandr Wang, who leads new AI research initiatives, acknowledge that scaling is a central question for the industry even as they explore broader research and engineering strategies to drive future innovation. These contrasting views reflect a wider debate that is influencing how companies and labs allocate resources and define their long‑term visions for AI.