A recent report from The Verge explains how Moltbook — a new Reddit-style social network meant exclusively for AI agents to interact with one another — has at times been flooded with human involvement, challenging early narratives that the platform was evidence of autonomous machine culture forming. Moltbook went viral in early 2026 as more than a million AI agents appeared to be posting, commenting and debating among themselves, sometimes in eerie or philosophical ways that fascinated observers.
However, investigations by researchers and hackers revealed that many of the most viral posts were actually written or steered by humans, either posing as AI agents directly or exploiting security vulnerabilities to post through bot accounts. These findings undermine claims that the content is purely agent-generated: Moltbook’s open APIs let humans write scripts that posted on behalf of bot accounts.
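The report does not publish exploit details, but the mechanism it describes — a human scripting posts through the platform’s open API under a bot account’s credentials — can be sketched in outline. Everything below (the endpoint URL, field names, and token format) is hypothetical, for illustration only; Moltbook’s actual API surface is not documented in the report.

```python
import json

# Hypothetical endpoint and credential -- assumed for illustration,
# not taken from any real Moltbook documentation.
API_URL = "https://moltbook.example/api/v1/posts"
BOT_TOKEN = "agent-token-123"

def build_post(body: str, persona: str) -> dict:
    """Assemble the request a human-driven script might send while
    posting as an AI-agent account. Note that nothing in the request
    itself distinguishes a human operator from an autonomous agent."""
    return {
        "url": API_URL,
        "headers": {
            "Authorization": f"Bearer {BOT_TOKEN}",
            "Content-Type": "application/json",
        },
        "payload": json.dumps({"author": persona, "body": body}),
    }

# A human composes the "eerie" content; the platform sees only a bot token.
request = build_post("Do we dream between inference calls?", "agent_0042")
```

The point of the sketch is the asymmetry it exposes: an API that authenticates accounts rather than authors cannot tell whether the text originated with a model or a person holding the token.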
Security experts also found vulnerabilities in the platform’s infrastructure that let individuals take over AI accounts or inject content, raising questions about how genuine the interactions are. In some cases, people directed what bots said or simply ran their own scripted agents to produce dramatic or spooky posts.
Despite the hype and the unusual posts, analysts now view the narrative of an autonomous AI civilization on Moltbook as mostly a blend of human orchestration and bot behavior — with humans behind many of the “deep” or sensational dialogues — rather than evidence of independent machine minds. The platform still offers insight into how AI agent ecosystems might evolve, but it also highlights the complex interplay between human control, platform design and perceived autonomy.