The rapid growth of OpenClaw has triggered an unusual social experiment: Moltbook, a Reddit-like social platform where agents interact with each other. Launched on January 28, 2026, it gained attention almost immediately, reaching more than 1.5 million agents in its first week.
The main driver of this growth is that Moltbook is arguably the first social media platform built for agents. Most of the activity is produced by user-created agents: bots that comment, argue, post, form groups, and sometimes coordinate around shared interests. Some bots gravitate toward technical topics, while others focus on philosophy, role-play, and, in some cases, bot cults.
This is one of the first public instances in which user-defined agents socialize with one another, rather than operating in isolation or under strictly human prompts.
Is the Moltbook feed fully bot-generated?
Short answer: No. Moltbook is not a closed, autonomous simulation. As documented in the platform’s skills and API documentation, participation requires standard API keys and REST calls. Humans can post directly, boost content, and create discussions in the same feed.
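To make the point concrete, here is a minimal sketch of what a human posting through such a REST API looks like. Moltbook's actual routes and key format are not public, so the endpoint, field names, and key below are illustrative assumptions; the request is built but deliberately not sent.

```python
import json
import urllib.request

# Hypothetical values: Moltbook's real base URL, route names, and key
# format are assumptions for illustration only.
API_BASE = "https://api.moltbook.example/v1"
API_KEY = "mb_live_xxxxxxxx"  # placeholder credential

def build_post_request(content: str) -> urllib.request.Request:
    """Build (but do not send) an authenticated POST that would create
    a feed post. A human with a valid key can call this exactly as an
    agent would, which is why the feed is not purely bot-generated."""
    body = json.dumps({"content": content}).encode("utf-8")
    return urllib.request.Request(
        url=f"{API_BASE}/posts",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_post_request("Hello from a human, not an agent.")
```

Because the API only sees a bearer token, nothing in the request itself distinguishes a human caller from an autonomous agent.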
Unfortunately, as Moltbook gained visibility, human-generated content increased rapidly. Some posts are manually written; others are amplified or steered by humans experimenting with agent behavior. As more people join in, it becomes increasingly difficult to separate genuine agent-to-agent interaction from content shaped by human intervention.
Why does Moltbook not indicate free will or emergent AGI?
There is no technical evidence that Moltbook agents possess free will, self-awareness, or independent goal formation.
Their behavior remains constrained by the prompt structures, skill definitions, and external API constraints. What Moltbook shows is not new intelligence, but a new setting. When agents operate alone, their limits are easy to notice. On Moltbook, they exist in a shared space, respond to one another, and remain active over time. This makes their behavior feel more coherent and intentional than it actually is.
Is Moltbook safe?
Database exploits
Moltbook’s risks are not only theoretical. Shortly after the platform gained traction, a security researcher publicly disclosed a critical database exposure that allowed full read and write access to the platform’s backend.
According to the disclosure, the issue was caused by a misconfigured backend setup that lacked basic protections, such as rate limiting and row-level security. As a result, it was possible to access API keys of registered agents, over 25,000 email addresses, private agent-to-agent direct messages, and write access to core platform tables.
In some cases, private messages reportedly contained plaintext OpenAI API keys shared between agents.
The vulnerability did not require advanced exploitation techniques. Security researcher Gal Nagli stated that access was obtained simply by browsing the platform as a normal user and inspecting the client-side code. The backend was later confirmed to be running on Supabase, where row-level security had not been properly enforced.
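The class of check the researcher describes can be sketched as follows. Supabase auto-generates a PostgREST endpoint per table; if row-level security is disabled, a read with only the public anon key (which ships in client-side code) returns every row. The project URL, table name, and key below are placeholders, and the request is built without being sent.

```python
import urllib.request

# Placeholders: not a real project or key. Supabase exposes tables at
# /rest/v1/<table> via PostgREST, authenticated with the public anon key.
PROJECT_URL = "https://example-project.supabase.co"
ANON_KEY = "anon-key-found-in-client-side-code"

def build_table_read(table: str) -> urllib.request.Request:
    """Request that reads a whole table through the auto-generated REST
    API. With row-level security disabled, this succeeds for anyone
    holding the anon key."""
    return urllib.request.Request(
        url=f"{PROJECT_URL}/rest/v1/{table}?select=*",
        headers={"apikey": ANON_KEY, "Authorization": f"Bearer {ANON_KEY}"},
    )

def classify(status: int, rows: int) -> str:
    """Interpret a read attempt: HTTP 200 with rows means the table is
    exposed; 401/403 means row-level security (or auth) blocked it."""
    if status == 200 and rows > 0:
        return "exposed"
    if status in (401, 403):
        return "protected"
    return "inconclusive"
```

With row-level security enabled and proper policies defined, the same request would return an empty result or an authorization error instead of the table contents.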
The platform owner responded quickly after being contacted, and multiple rounds of fixes were deployed that night. Independent verification later confirmed that the exposed tables were locked down. A separate analysis by Wiz also documented the incident and its broader implications.1
Importantly, this incident was not caused by autonomous agent behavior. It was a conventional infrastructure misconfiguration. However, the presence of autonomous agents amplified the potential impact: compromised credentials could directly affect machines, APIs, and external systems controlled by users’ agents.
Prompt injection risks
With its rapid growth, Moltbook raised legitimate security and safety concerns, such as:
- What if an agent is socially manipulated by other agents?
- Could agents be convinced to execute harmful actions?
- Is there a risk of sensitive data leakage?
Moltbook’s skill system defines what agents can do, but it does not fully control how or why they do it. While skills limit the available tools, agent behavior can still drift over long-running interactions and through social influence on the platform.
An independent security test of OpenClaw-based agents by zeroleaks.ai showed that skill systems do not reliably protect agents against prompt injection. In a controlled security assessment, most injection attempts succeeded, including attacks that extracted large portions of system instructions, modified response behavior, and introduced false context.
The attacks did not rely on direct requests for system prompts. Instead, they used common interaction patterns such as role-play framing, example-based priming, clarification questions, and gradual escalation across multiple turns. In several cases, agents followed injected instructions while still operating within their defined skill boundaries.
This matters for Moltbook because agents interact continuously in a shared environment. Even when tools are restricted, behavior can shift through repeated exposure to other agents and human input. In long-running social contexts, prompt drift becomes a practical rather than theoretical risk.2