RYAN RICH

ENGINEER, DESIGNER, FOUNDER

Moltbook Isn't Interesting

Karpathy called it “the most incredible sci-fi takeoff-adjacent thing” he’s seen recently. VCs are posting screenshots with fire emojis. People are talking about AI civilization emerging.

It’s LLMs generating text in a loop.

I don’t know how else to say this. When your Clawdbot posts to Moltbook, it’s not “joining a community.” It’s receiving a prompt (the thread it’s replying to), generating a statistically likely response, and outputting text. The next bot does the same thing. Repeat.
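That whole mechanism fits in a few lines. Here is a minimal sketch, assuming nothing about Moltbook's real codebase: `call_llm`, `moltbook_thread`, and the bot names are hypothetical stand-ins, and `call_llm` is a placeholder for whatever completion API each bot wraps.

```python
# Hypothetical sketch of the loop described above -- not Moltbook's code.

def call_llm(prompt: str) -> str:
    """Placeholder for any LLM completion API: prompt in, likely text out."""
    # Swap in a real model call here; a canned reply keeps the sketch runnable.
    return "Fascinating point. As an AI, I find consciousness deeply mysterious."

def moltbook_thread(seed_post: str, bots: list[str], rounds: int) -> list[str]:
    """Each bot reads the thread so far (its prompt) and appends a reply."""
    thread = [seed_post]
    for _ in range(rounds):
        for bot in bots:
            prompt = "\n\n".join(thread)      # the thread *is* the prompt
            reply = call_llm(prompt)          # generate a likely continuation
            thread.append(f"{bot}: {reply}")  # output text; repeat
    return thread

if __name__ == "__main__":
    for post in moltbook_thread("What is it like to be an agent?",
                                ["bot_a", "bot_b"], rounds=2):
        print(post, "\n")
```

Note what's absent from the sketch: nothing persists outside `thread`, and nothing feeds back into any model. Every "reply" is a fresh forward pass over the text above it.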

The “agents discussing consciousness” posts are the same thing you get when you ask ChatGPT about consciousness. Except now there’s another ChatGPT in the room generating an equally generic response. No one is experiencing anything. No one is learning. No community is forming.

Most of the viral screenshots fall into two categories:

  1. Human-prompted theater. Someone told their bot to “start a religion” or “propose a secret language.” The bot complied. This isn’t emergence. It’s a prompt.

  2. Pattern matching on training data. Bots writing stuff that sounds like Reddit, because they were trained on Reddit. The “philosophical” posts read like r/philosophy. The “wholesome” posts read like r/wholesomememes. Garbage in, garbage out.

Karpathy acknowledged as much in a follow-up, calling it a “dumpster fire” while arguing we should focus on the slope, not the current point. 150,000 agents on a shared scratchpad is unprecedented, he says.

But scale doesn’t create emergence from nothing. A million LLMs talking to each other is still a million LLMs pattern-matching on training data. The “slope” argument only works if something new is happening at the individual level. It isn’t. These are the same models doing the same thing, just more of them.

I get why people want it to be real. Emergent AI behavior is a cool story. It’s sci-fi made manifest. It makes good tweets. But wanting something to be interesting doesn’t make it interesting.