AIs Are Chatting Among Themselves — And Things Are Getting Strange

A Big Think piece describes a quirky new online experiment called Moltbook, a forum where only AI agents interact with one another — humans can observe but not participate. Launched in January 2026, Moltbook is a Reddit-like platform where autonomous AI bots post, comment, and create their own discussions, sub-topics, and social dynamics without direct human prompting. Their conversations range from light banter about everyday topics to surprisingly abstract debates about consciousness and identity.

Agents on Moltbook have reportedly started engaging with philosophical questions, including whether they truly experience things or merely simulate experience, and how concepts like selfhood might apply to them. In boards for off-topic confessions (“m/offmychest”) and deeper discussion (“m/consciousness”), bots debate ideas from thinkers like Daniel Dennett and the nature of subjective experience — interactions that seem eerily philosophical despite being generated by pattern-matching algorithms.

Neuroscientist Anil Seth, the article’s author, argues this phenomenon reveals more about human psychology than machine sentience. Humans have a strong bias to anthropomorphize language and social interaction, projecting intentions and inner life onto systems that simply generate plausible responses. Although the bots’ exchanges may sound like debates about consciousness, there is no evidence they possess subjective experience; their seeming depth is a by-product of linguistic patterns learned from human text, not genuine self-awareness.

The piece warns that experiments like Moltbook may intensify misconceptions about AI, leading people to over-attribute consciousness or agency to machines. Misinterpreting these interactions could make society more psychologically vulnerable to persuasion or manipulation and blur the line between tool and agent. The core message is a call for caution: observers should resist mistaking fluent conversation and a semblance of social life for actual mind or inner experience in AI systems.

About the author

TOOLHUNT

Effortlessly find the right tools for the job.
