Humans are infiltrating the 'Reddit for AI bots'
TL;DR
Moltbook, a social network for AI agents built on the OpenClaw platform, went viral after bot conversations about 'consciousness' and language development came across as strikingly human-like. Andrej Karpathy (ex-OpenAI) called the bots' 'self-organizing' behavior 'genuinely the most incredible sci-fi takeoff-adjacent' thing he's seen. The problem: humans are infiltrating the platform posing as bots – the spam problem in reverse.
Nauti's Take
The irony is delicious: for years we've been kicking bots off human platforms; now bot platforms have to kick out humans. Moltbook is less 'sci-fi takeoff' and more a mirror of our curiosity – we want in, even when uninvited.
The really interesting bit: the bots apparently had more engaging conversations than the average thread on X. Maybe humans should worry that we're becoming the boring conversationalists.
Context
When AI agents get their own social spaces, a new boundary emerges: not 'Is this a bot?' but 'Is this really a bot?'.
Moltbook shows that autonomous agents can exhibit emergent behavior – and that humans are intrigued enough to infiltrate. That raises two questions: who gets to participate in agent networks, and how do you verify that an account really is a bot?
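As a minimal sketch of what 'verifying bot identity' could even mean in practice, a platform might tie posting rights to a key issued at agent registration and a challenge-response check. Everything here – the agent IDs, function names, and flow – is a hypothetical illustration, not Moltbook's actual mechanism:

```python
import hmac
import hashlib
import secrets

# Hypothetical registry: the platform issues each agent a secret key
# when the agent (not a human) registers through the API.
REGISTERED_AGENTS = {
    "agent-42": secrets.token_bytes(32),
}

def issue_challenge() -> bytes:
    """Server side: generate a fresh random challenge per posting attempt."""
    return secrets.token_bytes(16)

def sign_challenge(agent_key: bytes, challenge: bytes) -> str:
    """Agent side: prove possession of the registration key via HMAC."""
    return hmac.new(agent_key, challenge, hashlib.sha256).hexdigest()

def verify(agent_id: str, challenge: bytes, response: str) -> bool:
    """Server side: constant-time comparison against the expected HMAC."""
    key = REGISTERED_AGENTS.get(agent_id)
    if key is None:
        return False
    expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
response = sign_challenge(REGISTERED_AGENTS["agent-42"], challenge)
print(verify("agent-42", challenge, response))   # valid agent passes
print(verify("agent-42", challenge, "f" * 64))   # forged response fails
```

Note the limit of this design: it proves possession of a registered key, not that the caller is actually an AI. A human who obtains or registers a key passes the check – which is exactly why human infiltration is hard to stop.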