A newly launched initiative seeks to build a more data-driven forecast of the pace of development in artificial intelligence (AI). The project recognises that while excitement and speculation abound about breakthroughs, reliable predictions remain elusive because existing models often rely on intuition rather than systematic measurement. To close that gap, the effort plans to monitor metrics such as compute power, model size, training cost, research output and real-world deployment, building an empirical basis for forecasting.
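To make the tracking idea concrete, here is a minimal sketch in Python of what a single tracked observation might look like. The class name, field names and choice of proxies are illustrative assumptions, not the project's actual schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MilestoneRecord:
    """One observation of a tracked AI system or training run (hypothetical schema)."""
    system_name: str               # e.g. a named model release
    observed_on: date              # date the figures were reported
    training_compute_flop: float   # total training compute, in FLOP
    parameter_count: int           # model size
    training_cost_usd: float       # estimated training cost
    papers_citing: int             # rough proxy for research output
    deployed_products: int         # rough proxy for real-world deployment

# A forecasting dataset would then be a time-ordered list of such records,
# one per documented system, from which trends can be fitted.
records: list[MilestoneRecord] = []
```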
The underlying challenge is enormous: forecasting AI progress involves both scientific and economic uncertainty. On the one hand, technical constraints (hardware, data, algorithmic innovation) may slow advances. On the other, AI is subject to exponential growth dynamics that make change rapid and non-linear. The initiative underscores that anticipating the shape, scale and timing of the next leaps in AI will require combining these technical inputs with macro-level modelling of diffusion, adoption and impact.
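As a rough illustration of what combining the two kinds of inputs could involve, the sketch below fits an exponential trend to a hypothetical compute series and pairs the extrapolation with a simple logistic curve standing in for adoption. The numbers, parameters and choice of models are assumptions for illustration only, not the initiative's stated methodology.

```python
import numpy as np

# Hypothetical yearly observations of one technical indicator (e.g. training
# compute of the largest published run, in FLOP). Values are illustrative.
years = np.array([2018, 2019, 2020, 2021, 2022, 2023])
compute = np.array([1e22, 3e22, 1e23, 4e23, 1e24, 5e24])

# Technical-input side: fit an exponential trend via linear regression on
# log10(compute), then extrapolate a few years ahead.
slope, intercept = np.polyfit(years, np.log10(compute), 1)
future_years = np.arange(2024, 2028)
projected_compute = 10 ** (slope * future_years + intercept)

# Macro-level side: a logistic curve as one simple stand-in for how adoption
# of a capability might diffuse once it exists (parameters are assumptions).
def logistic_adoption(t, midpoint, rate, ceiling=1.0):
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

adoption_share = logistic_adoption(future_years, midpoint=2026, rate=0.9)

for y, c, a in zip(future_years, projected_compute, adoption_share):
    print(f"{y}: projected compute ~ {c:.2e} FLOP, adoption share ~ {a:.2f}")
```

A real exercise would of course use curated historical data, report uncertainty ranges rather than point estimates, and test several functional forms; the point here is only how a technical trend and a diffusion model can be read side by side.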
Crucially, the project is not just about refining projections; it also addresses the implications of speed. Faster-than-expected progress could outpace policy, regulation and institutional adaptation. Slower progress, in contrast, could reshape how society plans for automation, labour markets and equity. By improving data quality and making assumptions transparent, the effort aims to help governments, industries and academics make better decisions, whether about infrastructure, education or safety.
Ultimately, by grounding its timeline in measurable inputs rather than conjecture, the new project may shift how we talk about AI: not as a speculative “next frontier”, but as a system with observable indicators and trajectories. In doing so, it may help reduce the uncertainty around when and how AI becomes transformative, and what that transformation might look like in practical, policy-relevant terms.