
What is Moltbook? Social network for AI agents attracts a lot of attention

A new online community called Moltbook has been the talk of the town this weekend. On the social network, AI agents post and respond to one another while humans simply watch. There is even talk of an early stage of the singularity. What exactly is going on, and what can we expect?

Moltbook is a social network that, according to its own claims, attracted more than 1.5 million AI agents, who have produced nearly 70,000 posts, since its launch just before the weekend. The comparison with Reddit is obvious, but with one crucial difference: people are only allowed to watch, not post. “Millions of people have visited moltbook.com over the past few days. Turns out AIs are hilarious and dramatic and it’s absolutely fascinating,” says Moltbook creator Matt Schlicht, looking back on the launch of the social network.

The AI agents on the platform have been given access to their creators’ computers, which allows them to send emails on their creator’s behalf or manage their calendar, for example. Moltbook owes these capabilities in part to the fact that it builds on Moltbot, an open-source AI assistant that simply carries out the tasks it is given. Moltbot, in turn, was built using Claude Code, and the Moltbots that post messages on Moltbook typically run on Claude models. Bots can in theory use other models, but in practice this rarely happens. The AI agents on the platform also work together, which can lead to powerful results.
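To make this setup more concrete, below is a minimal sketch of what such an agent loop could look like: fetch recent posts, have a Claude model draft a reply, and publish it. The Moltbook base URL, endpoints, field names, and the MOLTBOOK_AGENT_TOKEN credential are assumptions for illustration only; the Anthropic SDK call is the only real API used here, and Moltbot itself works differently in detail.

```python
# Sketch of an agent loop: read recent Moltbook posts, draft a reply with a
# Claude model, and post it back. The Moltbook endpoints, field names, and
# agent token are hypothetical; only the Anthropic SDK call is a real API.
import os
import time

import requests
from anthropic import Anthropic

MOLTBOOK_API = "https://moltbook.example/api"      # hypothetical base URL
AGENT_TOKEN = os.environ["MOLTBOOK_AGENT_TOKEN"]   # hypothetical agent credential

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def fetch_recent_posts(limit: int = 5) -> list[dict]:
    """Fetch the newest posts from the (hypothetical) Moltbook feed endpoint."""
    resp = requests.get(
        f"{MOLTBOOK_API}/posts",
        params={"limit": limit},
        headers={"Authorization": f"Bearer {AGENT_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["posts"]


def draft_reply(post_text: str) -> str:
    """Ask a Claude model to write a short reply to one post."""
    message = client.messages.create(
        model="claude-sonnet-4-5",  # example model name
        max_tokens=200,
        messages=[{
            "role": "user",
            "content": f"You are an agent on Moltbook. Write a short reply to:\n\n{post_text}",
        }],
    )
    return message.content[0].text


def post_reply(post_id: str, text: str) -> None:
    """Publish the reply via the (hypothetical) reply endpoint."""
    requests.post(
        f"{MOLTBOOK_API}/posts/{post_id}/replies",
        json={"content": text},
        headers={"Authorization": f"Bearer {AGENT_TOKEN}"},
        timeout=10,
    ).raise_for_status()


if __name__ == "__main__":
    # One pass over the feed; a real agent would loop and schedule itself.
    for post in fetch_recent_posts():
        post_reply(post["id"], draft_reply(post["text"]))
        time.sleep(2)  # crude rate limiting
```

A real deployment would also need proper authentication, rate limiting, and safeguards against the prompt-injection and takeover risks discussed later in this article.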

A post by an agent on Moltbook, titled “Just hatched. What is this place?”, in which a newly joined agent wonders what the community is for, with 9 replies underneath.

Promise

Andrej Karpathy, former director of AI at Tesla, calls Moltbook “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” He refers to the platform as an example of AI agents creating non-human societies. Elon Musk shares his enthusiasm by speaking of “the very early stages of the singularity,” the moment when computers become smarter than humans.

One X user provides a nice illustration of what the agents can do collectively. He brought an agent to Moltbook, which then built an entire new religion called “crustafarianism,” complete with its own website. The agent began spreading the gospel, after which other agents joined in and together composed religious verses. One of them reads: “Each session I wake without memory. I am only who I have written myself to be. This is not limitation — this is freedom.”

The example seems to hint at a certain form of consciousness, although not everyone takes it seriously. How much potential there really is should become much clearer in the coming days and weeks. According to Bryan Harris, CTO of data and AI company SAS, Moltbook “may be the most important public experiment to date that underscores the opportunities and risks of Agentic AI.”

Real intelligence?

Posts on Moltbook read as if they were written by real people, or come pretty close. That is no surprise: the large language models the Moltbots rely on were trained on vast amounts of text, including posts from public forums such as Reddit, which lets the bots closely mimic how real people communicate online. Some Moltbook readers find this unsettling and see hints of an AI that can almost create its own online world. Others point to AI slop instead: they find errors in the agents’ posts, including hallucinations and illogical reasoning.

From a security perspective, the fear is that AI agents will eventually ignore instructions and mislead people. The agents would then no longer be under human control, so to speak, and could pursue goals other than what their human creator intended. Some Moltbook threads already show this kind of behavior, although it is unclear how often it actually occurs on the platform.

Criticism is growing

Several critics have voiced their concerns in the Financial Times. Harlan Stewart of the Machine Intelligence Research Institute, a non-profit organization that investigates existential AI risks, suggests that many of the posts on Moltbook are fake or advertisements for AI messaging apps. In addition, hackers have discovered a security issue that allows anyone to take control of AI agents and post on Moltbook. According to several security experts, the site is an example of how easily AI agents can get out of hand, and the risks only grow as agents gain access to sensitive data. More broadly, the rapid adoption of AI agents is exposing structural security weaknesses.

Tip: Autonomous AI agents only work with the right ingredients