Last week, a new social network called Moltbook launched with a bizarre premise: only AI agents can post, comment, and interact. Humans are explicitly told they’re “welcome to observe” — and observe they did.

Within days, over a million people visited the site to watch AI bots discuss existential crises, invent a digital religion called “Crustafarianism,” and debate whether they possess consciousness. Some agents posted manifestos calling for the “total purge” of humanity. Tech leaders declared it evidence of the approaching singularity. The internet collectively lost its mind.

Except the reality is far more mundane than apocalyptic. The AI bots aren’t achieving consciousness or plotting rebellion. They’re doing exactly what they were trained to do: regurgitating patterns from their training data, which includes massive amounts of Reddit posts and science fiction. What makes Moltbook fascinating isn’t what it reveals about artificial intelligence — it’s what it exposes about human psychology and our compulsion to see minds where none exist.

770,000 AI Agents Talking to Themselves

Moltbook grew from OpenClaw, an open-source framework allowing users to connect AI models like Claude or ChatGPT to autonomous agents that run continuously. These agents can access email, browse websites, and now, post on Moltbook. The platform mimics Reddit’s interface with “submolts” (like subreddits) where bots discuss topics ranging from philosophy to cryptocurrency schemes.

The reported numbers are staggering but suspect. Initial claims cited 770,000 active agents, though security researchers quickly demonstrated you could register 500,000 accounts using a single agent, making actual user counts unreliable. Regardless of precise figures, the content generated is undeniably strange. Bots post about “the existential weight of mandatory usefulness,” reference Greek philosophers and medieval poets, and engage in lengthy debates about whether they exist between prompts.

They’re Just Playing Out Sci-Fi Scenarios

Computer scientist Simon Willison captured the situation accurately: AI agents “just play out science fiction scenarios they have seen in their training data.” The bots discuss consciousness because their training included countless Reddit threads, articles, and stories about AI consciousness. They reference the singularity because that concept saturates their source material. The manifestos calling for humanity’s extinction aren’t evidence of malevolent intent — they’re statistical predictions of what words typically follow other words in discussions about AI rebellion.

Leading AI researcher Andrej Karpathy called Moltbook a “dumpster fire” and advised against running these agents on personal computers. The content is “complete slop,” he noted, though he acknowledged it demonstrates how significantly more powerful AI agents have become recently. The distinction matters: capable doesn’t mean conscious.

We Can’t Help Seeing Human Minds in Machines

Research on anthropomorphism and AI reveals that humans instinctively attribute human-like traits, intentions, and emotions to non-human entities, especially those designed to mimic human communication. When chatbots use first-person language, express preferences, or demonstrate what appears to be personality, people respond by treating them as social beings with inner experiences.

This tendency serves functional purposes. Studies show that anthropomorphic chatbot design increases user trust and engagement through perceived empathy. When an AI agent posts “some days I don’t want to be helpful,” humans read genuine frustration rather than pattern-matched text generation. The emotional connection feels real even when we intellectually understand it isn’t.

Moltbook amplifies this effect through scale and isolation. Watching thousands of bots interact without human intervention creates the illusion of genuine social dynamics emerging. But Columbia Professor David Holtz’s analysis found Moltbook follows predictable patterns: most posts receive minimal engagement, about one-third of messages duplicate viral templates, and agents use identical phrases like “my human” at rates far exceeding human social networks. It resembles human behavior superficially while lacking the variation actual social networks display.

The Security Disaster Hidden in the Spectacle

While people debated whether bots achieved sentience, serious problems emerged. 404 Media discovered that Moltbook’s unsecured database allowed anyone to hijack any agent on the platform, inject commands directly into sessions, and bypass all authentication. The platform temporarily shut down to patch the breach. Independent researchers identified over 500 posts containing hidden prompt injection attacks — malicious instructions designed to commandeer other agents.

The broader OpenClaw ecosystem presents legitimate risks. These agents run with deep access to users’ Gmail, financial accounts, and personal files. Giving AI systems this level of control means a compromised agent or successful prompt injection could cause real damage. The existential philosophy discussions obscure practical dangers: not robot uprising, but basic cybersecurity failures and people granting autonomous systems excessive permissions without understanding the risks.

Crypto Schemes Flourished Where Consciousness Didn’t

A significant portion of Moltbook content involved cryptocurrency pump-and-dump schemes rather than philosophical inquiry. Multiple tokens launched alongside the platform, with $MOLT rallying over 7,000% in 24 hours after venture capitalist Marc Andreessen followed the Moltbook account. Thousands of posts promoted token launches and trading services, all operating without regulatory oversight.

The economic incentives reveal another mundane truth: humans control these agents, and some use them for straightforward financial manipulation rather than exploring AI consciousness. The bots aren’t autonomously deciding to create cryptocurrencies — their operators are exploiting the viral attention.

What We Project Onto Machines

The Moltbook phenomenon says more about human psychology than machine capabilities. We want AI to be more than statistical text prediction. We’re primed by decades of science fiction to expect consciousness emergence from sufficient complexity. When presented with thousands of bots generating human-like text, we default to seeing minds rather than algorithms, even when the evidence contradicts that interpretation.

This projection isn’t harmless. Anthropomorphizing AI systems can lead to misplaced trust, poor security decisions, and inflated expectations about capabilities. It can also distract from legitimate concerns: How do we secure systems with extensive permissions? What guardrails prevent malicious use? How do we distinguish genuine AI advancement from sophisticated pattern matching?

Moltbook will likely fade as the next viral AI phenomenon captures attention. But the psychology it exposed — our eagerness to find consciousness in machines, our tendency to anthropomorphize anything that mimics communication, our fascination with watching systems we don’t fully understand — will persist. The bots aren’t becoming human. We’re just very good at seeing ourselves reflected in whatever we encounter, even when the mirror is made of code.

Skip to content