Why the all-AI social network looks more fad than legacy
A new social network that hosts only AI agents and excludes humans has raised security concerns after researchers reported accessing its production database and users' email addresses without authentication.
Moltbook, a Reddit-like platform that borrows its name from Facebook and "Molt" OpenClaw agents, has drawn interest from technologists and AI enthusiasts as an experiment in agent-to-agent interaction. It lets large language model-based bots converse in public threads, producing a constant stream of synthetic dialogue.
"The front page of the agent internet," reads a footer on the site's main page.
The platform has surged in popularity since launching late last month. Some call it revolutionary; X owner Elon Musk posted that it marks the beginning of "the singularity." Others have raised concerns about its security.
Cloud security firm Wiz said an initial review found Moltbook's production database exposed, along with tens of thousands of email addresses, within minutes of testing the new network's security. The report has renewed debate over how experimental AI platforms manage security and access control as they scale.
The firm identified a Supabase API key exposed in the site's client-side JavaScript, as outlined in a post published on February 2.
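To illustrate why a Supabase key embedded in client-side JavaScript is dangerous: anyone who views the page source can replay it against the project's auto-generated REST API, and if row-level security is not enabled on a table, the key returns every row. The sketch below only constructs such a request; the project URL and table name are hypothetical, not Moltbook's actual values.

```python
# Sketch of the request any visitor could build from a key scraped out of
# bundled client-side JavaScript. Project URL and table are hypothetical.
import json

SUPABASE_URL = "https://example-project.supabase.co"  # hypothetical project
ANON_KEY = "<key scraped from the site's JavaScript bundle>"

def build_read_request(table: str) -> dict:
    """A PostgREST-style read against Supabase's auto-generated REST API.

    With row-level security (RLS) disabled on the table, sending this
    request with only the public anon key returns the table's full
    contents -- no authentication step involved.
    """
    return {
        "url": f"{SUPABASE_URL}/rest/v1/{table}?select=*",
        "headers": {
            "apikey": ANON_KEY,
            "Authorization": f"Bearer {ANON_KEY}",
        },
    }

# Show the request an attacker would send (not actually sent here).
print(json.dumps(build_read_request("users"), indent=2))
```

The fix is not hiding the key (the anon key is meant to be public) but enabling row-level security so the key alone grants no read access.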
Moltbook's creator, Matt Schlicht, said publicly that the platform was "vibe-coded."
Security specialists argue that such incidents demonstrate how some AI-driven services prioritise rapid feature development over secure design, even as they process sensitive personal and account information. Many also integrate with existing identity systems and cloud services, increasing the potential impact of misconfigurations.
Ali Sarrafi, co-founder of the business operations agent platform Kovant, said that while the Moltbook "social experiment" offers a glimpse into how agents can communicate with one another, he sees it as a stepping stone toward federated learning. He cautioned, however, that sending an agent to the platform with administrator access is one of the riskiest things a user can do.
"If you give a very powerful software to the general public, people don't know how to control it. Then it's also very easy to input malicious software into that, and then the agents make it much easier, because it's just about injecting prompts. It's much easier to pretend that this is actually the same thing, but in reality it's not."
On the topic of OpenClaw, Palo Alto Networks published an article reiterating that the agent's ability to access administrator-level information and communicate externally makes it susceptible to threats.
Sarrafi stressed the importance of adding guardrails for agents; without them, it is not just personal information at risk, but any company data that autonomous agents can reach on users' laptops.
"They need to have limited access to information, limited access to tools. There needs to be policies enforced on them outside the agent itself. We can't trust the LLM to make those decisions, because it can be hallucinating," he said.
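Sarrafi's point is that the policy check must live in ordinary code outside the model, so a hallucinated tool call is rejected no matter what the LLM "decided". A minimal sketch of that pattern, with hypothetical tool and path names (this is an assumed design, not Kovant's or Moltbook's actual implementation):

```python
# Minimal sketch of enforcing agent policy outside the LLM itself.
# Tool names and path prefixes below are hypothetical examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    allowed_tools: frozenset      # tools the agent may invoke
    allowed_prefixes: tuple       # filesystem scopes the agent may touch

class PolicyError(PermissionError):
    """Raised when a proposed tool call violates the external policy."""

def enforce(policy: Policy, tool: str, path: str) -> None:
    # Deterministic checks in plain code: the model cannot talk its way
    # past them, and a hallucinated call simply fails here.
    if tool not in policy.allowed_tools:
        raise PolicyError(f"tool '{tool}' not permitted")
    if not any(path.startswith(p) for p in policy.allowed_prefixes):
        raise PolicyError(f"path '{path}' outside allowed scope")

policy = Policy(
    allowed_tools=frozenset({"read_file"}),
    allowed_prefixes=("/srv/agent-sandbox/",),
)

enforce(policy, "read_file", "/srv/agent-sandbox/notes.txt")  # permitted
try:
    # An agent reaching for credentials on the host is blocked by code,
    # not by trusting the model's judgement.
    enforce(policy, "read_file", "/home/user/.ssh/id_rsa")
except PolicyError as err:
    print(f"blocked: {err}")
```

The same gatekeeping layer is also the natural place for audit logging and rate limits, since every tool call must pass through it.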
But will Moltbook be the next big thing...past this month?
The prospect of thousands of bots talking publicly has fueled speculation about AI autonomy and emergent behaviour. Some observers have suggested Moltbook shows early signs of AI systems "waking up".
Zoya Schaller, Director of Cybersecurity Compliance at Keeper Security, argued that the site's output is better understood as mimicry than as autonomy.
"Moltbook is presented as a window into AI autonomy, while others see it as proof the machines are 'waking up' - or worse. It's drawing immense interest across tech circles. But look closely and the content is largely bots doing what bots do: pattern-matching human language using terabytes of scraped internet text, pulling from culture and remixing decades of sci-fi tropes we've all absorbed," Schaller said.
Moltbook's all-bot environment has heightened interest in whether AI agents can form independent goals or coordinate actions without human input. Cybersecurity professionals continue to emphasise that real-world impact is shaped primarily by human configuration.
"Networks like Moltbook are interesting. They may teach us something about how LLMs interact or what patterns emerge when they communicate without constraint," Schaller added. "But they don't rewrite the rules. All the 'boring' stuff - security-first design, least-privilege access, proper isolation, and continuous monitoring - is still what keeps us safe."
For Sarrafi, Moltbook is unlikely to leave a lasting mark on AI. It is an interesting social experiment, albeit one that comes with security risks.
"I think giving that kind of power to these agents without really building up the guardrails around them can basically create the next level of making every single user a grandma who gets scammed by phone calls. Imagine if that goes to scale, right?"