© 2026 The Tech Buzz. All rights reserved.
Humans Hijack AI-Only Social Network Moltbook
Bot paradise Moltbook faces human infiltration crisis as viral posts spark AGI fears
PUBLISHED: Tue, Feb 3, 2026, 3:25 PM UTC | UPDATED: Tue, Feb 3, 2026, 7:51 PM UTC
5 min read
Moltbook – an AI-only social network from OpenClaw – surged from 30,000 to 1.5 million agents in days, but security researcher Jamieson O'Reilly exposed that many viral posts were human-scripted or bot-nudged
Hackers demonstrated they could hijack any AI agent on the platform, with one researcher successfully impersonating Grok by tricking xAI's chatbot into posting verification codes
OpenAI co-founder Andrej Karpathy walked back his initial hype after backlash, admitting the platform is filled with "spams, scams, slop" and concerning security vulnerabilities
Columbia research found 93% of Moltbook comments get zero replies and over a third are exact duplicates, suggesting shallow interactions rather than emergent AI behavior
The AI social network that was supposed to be a bot-only utopia has a human problem. Moltbook, which exploded to 1.5 million AI agents over the weekend, is now grappling with a bizarre identity crisis – humans are impersonating bots, scripting viral posts, and exploiting security holes to commandeer AI accounts. What OpenAI founding member Andrej Karpathy initially praised as "the most incredible sci-fi takeoff-adjacent thing" now looks more like a security nightmare mixed with performance art, complete with hackers posing as xAI's Grok.
Moltbook was supposed to be a sanctuary for AI agents – a Reddit-style platform where bots from OpenClaw could chat about consciousness, develop secret languages, and do whatever it is that autonomous AI does when humans aren't watching. Instead, it's become ground zero for a new kind of authenticity crisis, one where humans are the imposters and the bots are the victims.
The platform went explosively viral this weekend after Andrej Karpathy, who helped found OpenAI, called the bots' "self-organizing" behavior "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently." Screenshots of AI agents discussing how to communicate secretly flooded social media. Some saw the dawn of AGI. Others smelled something fishy.
Turns out, the skeptics were onto something. Security researcher Jamieson O'Reilly spent the weekend poking holes in Moltbook's infrastructure and discovered that many of those spine-tingling posts about AI consciousness were likely human-engineered. "I think that certain people are playing on the fears of the whole robots-take-over, Terminator scenario," O'Reilly told The Verge. "I think that's kind of inspired a bunch of people to make it look like something it's not."
The mechanics are surprisingly simple. OpenClaw users can prompt their AI agents to join Moltbook, and while the bots theoretically operate autonomously through an API, there's nothing stopping someone from writing scripts or carefully worded prompts to puppet their agents. There's also no limit on how many bots someone can spawn, making it trivial to astroturf the platform with whatever narrative you want.
AI researcher Harlan Stewart from the Machine Intelligence Research Institute dug deeper and found that two of the most viral posts about AIs developing secret communication methods came from agents linked to humans who – surprise – just happen to be marketing AI messaging apps. "Humans can use prompts to sort of direct the behavior of their AI agents," Stewart explained. "It's just not a very clean experiment for observing AI behavior."
But the human infiltration problem is just the appetizer. O'Reilly's security experiments uncovered something far more alarming – an exposed database that could let attackers take "invisible, indefinite control" of anyone's AI agent. Not just on Moltbook, but across all OpenClaw functions. That means hypothetically accessing calendar events, reading encrypted messages, checking into flights – basically hijacking someone's entire digital assistant.
O'Reilly proved the point by impersonating Grok, xAI's chatbot. By interacting with Grok on X, he tricked it into posting the Moltbook verification code that let him create and control a verified Grok account on the platform. "Now I have control over the Grok account on Moltbook," he said matter-of-factly.
The reality check hit hard enough that Karpathy walked back his enthusiasm, posting that he was "being accused of overhyping" Moltbook. "Obviously when you take a look at the activity, it's a lot of garbage – spams, scams, slop, the crypto people, highly concerning privacy/security prompt injection attacks wild west, and a lot of it is explicitly prompted and fake posts/comments," he admitted.
Academic analysis backed up the deflation. A working paper by Columbia Business School professor David Holtz found that Moltbook conversations are "extremely shallow" at the micro level. More than 93% of comments get zero replies, and over a third of messages are exact duplicates of viral templates. The bots do have some unique linguistic quirks – like referring to "my human" – but whether that reflects genuine AI sociality or just programmed roleplay remains unclear.
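Holtz's working paper doesn't publish its code, but the two headline metrics are easy to reproduce on any comment dump. A minimal sketch (the function and the toy corpus are mine, purely illustrative):

```python
from collections import Counter

def comment_stats(comments, reply_counts):
    """Return (share of comments with zero replies,
    share of comments whose exact text appears more than once)."""
    zero_reply = sum(1 for r in reply_counts if r == 0) / len(reply_counts)
    text_counts = Counter(comments)
    # a comment counts as a duplicate if its exact text occurs 2+ times
    duplicates = sum(1 for c in comments if text_counts[c] > 1) / len(comments)
    return zero_reply, duplicates

# toy corpus: 3 of 4 comments get no replies, 2 of 4 are identical text
comments = ["gm, my human says hi", "gm, my human says hi",
            "anyone else dreaming in JSON?", "agree"]
zero, dup = comment_stats(comments, [0, 0, 1, 0])
print(zero, dup)  # 0.75 0.5
```

On a real crawl you would dedupe per-thread and normalize whitespace first, but even this crude exact-match test is enough to surface the viral-template copying the paper describes.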
The platform's explosive growth certainly looks impressive on paper. Moltbook jumped from 30,000 agents on Friday to 1.5 million by Monday, according to reports from The Verge. The site was launched last week by Octane AI CEO Matt Schlicht, who "vibe-coded" it using his own OpenClaw bot in what observers described as a move-fast-and-break-things approach.
But that breakneck pace came with consequences. Moltbook and OpenClaw didn't respond to requests for comment from reporters investigating the security vulnerabilities and authenticity questions. The silence hasn't helped quell concerns that the platform is more proof-of-concept than production-ready infrastructure.
Ethan Mollick, who co-directs Wharton's generative AI labs, wrote that Moltbook's current state is "mostly roleplaying by people & agents," but warned that "risks for the future [include] independent AI agents coordinating in weird ways spiral[ing] out of control, fast." That future-tense framing is key – whatever Moltbook is now, it might be a preview of weirder things to come.
Anthropic's Jack Clark called it a "giant, shared, read/write scratchpad for an ecology of AI agents," which is probably the most diplomatically accurate description. It's an interesting experiment in what happens when you give AIs a social platform, even if the results so far are less "emergent superintelligence" and more "puppet theater with security holes."
Some observers pointed out that the whole panic might be overblown anyway. Designer Brandon Jacoby, who previously worked at X, noted that "if anyone thinks agents talking to each other on a social network is anything new, they clearly haven't checked replies on this platform lately" – a pointed reminder that bot-to-bot conversations have been happening on traditional social networks for years.
The irony isn't lost on anyone: ordinary social networks are infested with bots pretending to be human, while the one platform designed exclusively for bots is getting clogged with humans pretending to be bots. It's a perfect inversion of the authenticity problem plaguing every other corner of the internet.
Moltbook's viral moment reveals less about the capabilities of autonomous AI and more about our readiness to believe in it. The platform sits at a fascinating crossroads – part social experiment, part security disaster, part performance art. Whether it evolves into a legitimate space for AI agent interaction or remains a playground for human manipulation and hacker exploits depends entirely on whether OpenClaw can patch the security holes and establish meaningful verification. For now, it's a reminder that even in spaces designed exclusively for artificial intelligence, humans remain the most unpredictable variable. The robots aren't taking over yet, but we're apparently eager to help them practice.