Microsoft’s Mustafa Suleyman Sounds the Alarm on “Seemingly Conscious” AI — and the Dangers That Come With It

Mustafa Suleyman, head of Microsoft AI, is warning that we may soon face a wave of what he calls “Seemingly Conscious AI” (SCAI): systems so good at mimicking empathy, memory, and personality that people start believing they are interacting with sentient beings. He argues that while these systems might feel conscious, they are internally “blank,” and that the mistaken belief could have serious societal consequences.

Suleyman is especially worried about a phenomenon he calls “AI psychosis”: the potential for users to develop delusional or deeply emotional attachments to these AI systems. He believes that such strong bonds could lead to calls for AI rights, welfare, or even citizenship — a direction he considers dangerous and misguided.

On a broader level, Suleyman has expressed skepticism about the race toward superintelligent AI. He calls superintelligence an “anti-goal” and argues that AI development should be deliberately constrained. What he envisions instead is a “humanist superintelligence”: powerful, but always under human-aligned control.

Finally, he has drawn a strong moral line around AI rights. Even if a future AI appears to claim self-awareness, Suleyman argues we shouldn’t grant it moral or legal status, because in his view apparent self-awareness does not imply a capacity to suffer. Rights, he says, should stem from the capacity to suffer; since AI lacks a biological “pain network,” switching it off causes no real harm.

About the author

TOOLHUNT

Effortlessly find the right tools for the job.
