Claude Code leak suggests Anthropic is working on a 'Proactive' mode for its coding tool
TL;DR
A routine update to Claude Code (version 2.1.88) accidentally exposed the app's full source code – over 512,000 lines across 2,000 TypeScript files.
Key Points
- Before Anthropic could respond, the code was uploaded to GitHub and forked more than 50,000 times, giving competitors a full look at the codebase.
- Users found a flag for a 'Proactive Mode' in the leaked code, suggesting Claude Code may eventually work autonomously without the user being actively present.
- AI startup founder Alex Finn surfaced the finding on X – Anthropic has not officially commented on the feature.
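The article does not show the actual flag, so as a purely hypothetical sketch of what a "Proactive Mode" feature flag might look like in a TypeScript codebase: every name here (`FeatureFlags`, `proactiveMode`, `nextAction`) is invented for illustration and is not taken from the leaked code.

```typescript
// Hypothetical illustration only: FeatureFlags, proactiveMode, and nextAction
// are invented names, not identifiers from the leaked Claude Code source.
interface FeatureFlags {
  proactiveMode: boolean;
}

// Decide what the assistant does next. With the flag off, it only ever
// reacts to an explicit user prompt; with the flag on, it may act on its
// own when no prompt is pending.
function nextAction(flags: FeatureFlags, userPrompt: string | null): string {
  if (flags.proactiveMode && userPrompt === null) {
    // Autonomous path: plan work without waiting for user input.
    return "agent:plan-next-task";
  }
  // Default reactive path: respond only to explicit prompts.
  return userPrompt === null ? "idle" : `respond:${userPrompt}`;
}

console.log(nextAction({ proactiveMode: true }, null));  // agent:plan-next-task
console.log(nextAction({ proactiveMode: false }, null)); // idle
```

The point of gating such behavior behind a flag is that a single boolean separates a reactive assistant from an autonomous one, which is why a stray flag name in leaked source is enough to fuel speculation.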
Nauti's Take
Anthropic builds one of the most-used AI coding tools in the world – and just shipped its entire source code to the public via a sloppy update. This is not a minor slip; it is a genuine security incident that proves even the best AI labs have old-fashioned software engineering problems.
The 'Proactive Mode' sounds powerful, but it immediately raises the question of how much control users actually want to surrender, and how Anthropic plans to prevent an autonomously acting Claude Code from causing more harm than good. Those 50,000 GitHub forks are a reminder that 'move fast and break things' still applies, even in the age of AI safety manifestos.
Context
The leak is more than an embarrassing accident – it highlights how thin the line is between internal development state and public exposure at major AI companies. A 'Proactive Mode' would fundamentally shift Claude Code from a reactive assistant to an autonomous agent acting without user prompts, a direct step toward the much-discussed 'agentic AI' paradigm, where safety and control questions remain largely unresolved. For competitors, 512,000 lines of production code are an involuntary gift.