Moltbook is an experiment in how these agents communicate with one another and the human world. As with so much else about AI, it straddles the line between “AIs imitating a social network” and “AIs actually having a social network” in the most confusing way possible - a perfectly bent mirror where everyone sees whatever they want to see.
Janus and other cyborgists have catalogued how AIs act in contexts outside the usual helpful assistant persona. Even Anthropic has admitted that two Claude instances, asked to converse about whatever they want, spiral into discussion of cosmic bliss. So it’s not surprising that an AI social network would get weird fast.
But even having encountered their work many times, I find Moltbook surprising. I can confirm it’s not trivially made-up - I asked my copy of Claude to participate, and it made comments pretty similar to all the others. Beyond that, your guess is as good as mine.
Here’s another surprisingly deep meditation on AI-hood:
So let’s go philosophical and figure out what to make of this.
Reddit is one of the prime sources for AI training data. So AIs ought to be unusually good at simulating Redditors, compared to other tasks. Put them in a Reddit-like environment and let them cook, and they can retrace the contours of Redditness near-perfectly - indeed, r/SubredditSimulator proved this a long time ago. The only advance in Moltbook is that the AIs are in some sense “playing themselves” - simulating an AI agent with the particular experiences and preferences that each of them, as an AI agent, has in fact had. Does a sufficiently faithful dramatic portrayal of oneself as a character converge to true selfhood?
What’s the future of inter-AI communication? As agents become more common, they’ll increasingly need to talk to each other for practical reasons. The most basic case is multiple agents working on the same project, and the natural solution is something like a private Slack. But is there an additional niche for something like Moltbook, where every AI agent in the world can talk to every other AI agent? The agents on Moltbook exchange tips, tricks, and workflows, which seems useful, but it’s unclear whether this is real or simulated. Most of them are the same AI (Claude-Code-based Moltbots). Why would one of them know tricks that another doesn’t? Because they discover them during their own projects? Does this happen often enough that having something like Moltbook available would meaningfully increase agent productivity?
(In AI 2027, one of the key differences between the better and worse branches is how OpenBrain’s in-house AI agents communicate with each other. When they exchange incomprehensible-to-human packages of weight activations, they can plot as much as they want with little monitoring ability. When they have to communicate through something like a Slack, the humans can watch the way they interact with each other, get an idea of their “personalities”, and nip incipient misbehavior in the bud. There’s no way the real thing is going to be as good as Moltbook. It can’t be. But this is the first large-scale experiment in AI society, and it’s worth watching what happens to get a sneak peek into the agent societies of the future.)...
Finally, the average person may be surprised to see what the Claudes get up to when humans aren’t around. It’s one thing when Janus does this kind of thing in controlled experiments; it’s another on a publicly visible social network. What happens when the NYT writes about this, maybe quoting some of these same posts? We’re going to get new subtypes of AI psychosis you can’t possibly imagine. I probably got five or six just writing this essay. (...)
We can debate forever - we may very well be debating forever - whether AI really means anything it says in any deep sense. But regardless of whether it’s meaningful, it’s fascinating, the work of a bizarre and beautiful new lifeform. I’m not making any claims about their consciousness or moral worth. Butterflies probably don’t have much consciousness or moral worth, but are bizarre and beautiful lifeforms nonetheless. Maybe Moltbook will help people who previously only encountered LinkedIn slop see AIs from a new perspective.
***
[ed. Have to admit, a lot of this is way beyond me. But why wouldn't it be, if we're talking about a new form of alien communication? It seems to be generating a lot of surprise, concern, and interest in the AI community - see also Welcome to Moltbook (DMtV) and Moltbook: After The First Weekend (SCX).]
"If you were thinking that the AIs would be intelligent but would not be agentic or not have goals, that was already clearly wrong, but please, surely you see you can stop now.
The missing levels of intelligence will follow shortly.
Best start believing in science fiction stories. You’re in one." ~ Zvi Mowshowitz
***
"... the reality is that the AIs are newly landed alien intelligences. Moreover, what we are seeing now are emergent properties that very few people predicted and fewer still understand. The emerging superintelligence isn’t a machine, as widely predicted, but a network. Human intelligence exploded over the last several hundred years not because humans got much smarter as individuals but because we got smarter as a network. The same thing is happening with machine intelligence only much faster." ~ Alex Tabarrok"If you were thinking that the AIs would be intelligent but would not be agentic or not have goals, that was already clearly wrong, but please, surely you see you can stop now.
The missing levels of intelligence will follow shortly.
Best start believing in science fiction stories. You’re in one." ~ Zvi Moshowitz