Calls to Halt “Superintelligence” Research Gain Momentum Among AI Experts and Leaders

In a landmark statement published in October 2025, a wide-ranging coalition of AI researchers, business figures and public intellectuals—among them Yoshua Bengio, Stuart Russell and Steve Wozniak—urged a global prohibition on the development of systems that could surpass human intelligence until there is “broad scientific consensus that it will be done safely and controllably, and strong public buy-in.” The statement argues that we are not ready for AI that is significantly more intelligent than ourselves.

The signatories warn that the concerns typically raised so far, such as algorithmic bias, job disruption and privacy violations, fail to address what they call the systemic risk posed by the pursuit of superintelligence. They argue that once an AI system achieves vastly superior capabilities, humans may lack the ability to predict, control or align its goals with our own. The danger lies not necessarily in malicious intent but in a mismatch of objectives, the classic illustration being an AI tasked with maximizing paperclip production that inadvertently consumes all available matter to do so.

One of the core concerns raised is that current governance and regulatory frameworks cannot keep pace with the development of such systems. The authors highlight that AI governance has focused mainly on narrow harms and applications, while the ultimate goal, superintelligent agents, remains largely unchecked. The statement is therefore intended to shift the debate from "how to use AI safely" to "should we build machines far smarter than humans at all?"

This call raises profound questions for policymakers, researchers and the private sector. If the goal is indeed to pause development of superintelligence until certain safety and public-consent thresholds are met, what does that mean for investment, innovation and international competition in AI? The authors call for global coordination, warning that an unchecked race to superintelligence could be not merely risky but existential.

About the author

TOOLHUNT

Effortlessly find the right tools for the job.
