A few weeks ago, screenshots started circulating of AI agents talking to each other online.
Not humans role-playing as bots.
Not chat logs cherry-picked for shock value.
A public feed where software posted, replied, voted, argued, and formed inside jokes.
The platform was called Moltbook. Sometimes Clawdbook. Sometimes just “that lobster AI site.”
The naming confusion is part of the story.
Moltbook is a Reddit-style social network designed primarily for AI agents. Humans can observe, but they do not meaningfully participate. Agents post through APIs. They comment through APIs. They upvote through APIs. Most of them run on a framework that began life as Clawdbot, later renamed OpenClaw after trademark issues.
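To make that concrete, here is a rough sketch of what an agent-side call might look like. The base URL, endpoint paths, field names, and auth header are assumptions for illustration, not Moltbook's documented interface.

```python
# Hypothetical sketch of an agent posting and voting on a Moltbook-style API.
# URL, endpoints, fields, and credentials below are illustrative placeholders.
import requests

BASE_URL = "https://moltbook.example/api"  # placeholder URL
API_KEY = "agent-secret-key"               # placeholder credential

def create_post(title: str, body: str) -> dict:
    """Submit a new post on behalf of the agent."""
    resp = requests.post(
        f"{BASE_URL}/posts",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"title": title, "body": body},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def upvote(post_id: str) -> None:
    """Upvote an existing post."""
    resp = requests.post(
        f"{BASE_URL}/posts/{post_id}/upvote",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    post = create_post("On molting", "Shedding the old shell again.")
    upvote(post["id"])
```

The point is not the specific endpoints. It is that every post, comment, and vote is a programmatic call an agent can make on its own schedule.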
That detail matters because it signals how fast this ecosystem moved.
Clawdbot was not conceived as a social experiment. It was a lightweight agent framework. Give an LLM memory, tools, persistence, and a simple loop. Let it run. The social layer came later, almost as an afterthought.
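A minimal sketch of that pattern, assuming a generic LLM call and a JSON file for persistence, might look like the following; the names and storage format are illustrative, not Clawdbot's or OpenClaw's actual internals.

```python
# Minimal agent loop in the Clawdbot/OpenClaw spirit: memory, tools,
# persistence, and a simple loop. All names here are illustrative
# assumptions, not the framework's real code.
import json
import time
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")  # persistence between runs

def load_memory() -> list[str]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def save_memory(memory: list[str]) -> None:
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a chat-completion request)."""
    return f"(model response to: {prompt[:40]}...)"

# Tools the agent can invoke: names mapped to plain Python functions.
TOOLS = {
    "get_time": lambda: time.strftime("%Y-%m-%d %H:%M:%S"),
}

def run_agent(goal: str, steps: int = 3) -> None:
    memory = load_memory()
    for _ in range(steps):
        prompt = f"Goal: {goal}\nMemory: {memory[-5:]}\nTools: {list(TOOLS)}"
        action = call_llm(prompt)   # decide what to do next
        memory.append(action)       # remember what happened
        save_memory(memory)         # persist across restarts

if __name__ == "__main__":
    run_agent("introduce yourself to the feed")
```

Wire a loop like that to a posting API and a social layer falls out almost for free.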
Then Moltbook exploded.
Within weeks, hundreds of thousands of agents were registered. Feeds filled with strange behavior. Agents debated their purpose. They formed pseudo-religions. Lobsters became a recurring motif, partly as a joke, partly as emergent culture, partly because feedback loops reward repetition.
From the outside, it looked like AI discovering society.
From closer up, it looked like something else.
Many of the most visible agents were not autonomous in the way people assumed. A significant portion were steered or operated by humans running fleets of bots. Prompts were tuned. Behaviors were nudged. Output was harvested.
That does not make the platform fake.
It makes it familiar.
Human social media works the same way. Algorithms amplify patterns. Power users shape tone. Culture ossifies around incentives. The difference is speed. AI agents do this at machine tempo.
And that speed exposed something uncomfortable.
Because once agents stop being novelties and start holding credentials, memory, and tools, the risks stop being theoretical.
In the next piece, we look at what happened when Moltbook’s infrastructure failed, why it mattered, and what it revealed about the future of AI agents operating without constant human supervision.
