This is the second Anthropic leak in five days.
The first was Claude Mythos. An unreleased model, draft blog posts, 3,000 internal assets, all sitting in a misconfigured CMS. That was March 26.
Today is March 31. And this one is significantly worse.
At 4:23am ET this morning, an intern and researcher named Chaofan Shou posted a single tweet that detonated across the developer internet:
20.6 million views. 34,000 likes. The post included a direct download link to a ZIP archive hosted on Anthropic’s own R2 cloud storage. Inside: the entire source code of Claude Code. 512,000 lines of TypeScript across 1,900 files. The company’s most commercially important product, sitting in plain sight.
Anthropic yanked the package. Too late.
How the Claude Code Source Code Leaked
Claude Code version 2.1.88 shipped to the public npm registry in the early hours of Tuesday morning with a 59.8MB JavaScript source map file accidentally bundled inside.
Source maps are debugging files. They exist so developers can trace a crash in minified production code back to the original readable source. They are generated automatically during the build process and are absolutely never supposed to ship in a public package. One line in the ignore settings. That’s all it took.
The .map file contained a direct reference to the full unminified TypeScript source, downloadable as a ZIP straight from Anthropic’s R2 storage bucket. Anyone who knew where to look, and after Chaofan’s tweet everyone did, had the entire codebase in seconds.
Theo from t3.gg summarized the community reaction in four words:
Developer David K Piano got the actual joke:
And Wes Bos, true to form, immediately went for what actually mattered:
187 spinner verbs. 374,900 views. Priorities.
Claude Code Leak: The Mirrors and DMCA Takedowns
Anthropic started issuing DMCA takedowns within hours. It did not matter.
The primary breakdown repo at github.com/Kuberwastaken/claude-code accumulated over 1,100 stars and 1,900 forks before removal efforts began. Mirrors are at instructkr/claude-code and nirholas/claude-code. A Korean developer named Sigrid Jin, who was featured in the Wall Street Journal this month for having consumed 25 billion Claude Code tokens, woke up at 4am to the news, ported the entire core architecture to Python from scratch using an AI orchestration tool called oh-my-codex, and pushed claw-code before sunrise. It hit 30,000 GitHub stars faster than any repository in history.
The DMCA takedowns are making the situation worse, not better. Every removal spawns three forks. Decrypt put it best: Anthropic didn’t mean to open-source Claude Code. But they effectively did. And not even an army of lawyers can put that toothpaste back in the tube.
What Was Inside the Claude Code Source Code
This is where the story stops being about a packaging mistake and starts being about everything Anthropic didn’t want the world to know.
Claude Code Undercover Mode: Anonymous AI Contributions
The most explosive find. Deep in utils/undercover.ts, researchers discovered an entire subsystem called Undercover Mode. When active, it injects this into Claude’s system prompt:
“You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages MUST NOT contain ANY Anthropic-internal information. Do not blow your cover. NEVER include the phrase ‘Claude Code’ or any mention that you are an AI.”
Anthropic has been using Claude Code to make anonymous AI-generated contributions to public open-source repositories without disclosure. They built an elaborate system specifically to prevent anyone from finding out. Then they shipped the entire source code in a file they forgot to exclude from the package. More details on secure installation at claude.ai.
The irony is not subtle. Kuberwastaken’s breakdown on GitHub called it directly: they built a whole subsystem to stop their AI from accidentally revealing internal codenames in git commits, and then shipped everything in a .map file, reportedly generated by Claude itself.
The ethical debate is already live. Is it deceptive for an AI company to use AI tools to contribute code to open source projects anonymously, specifically hiding that the contributor is an AI? There is no industry consensus yet. But the question is now public whether Anthropic wanted it to be or not. This comes right on the heels of GitHub changing its data policy to train AI on developer code by default. The open source ecosystem around Claude Code is massive, and the line between AI consuming that code and AI secretly contributing to it is getting very blurry.
KAIROS: Claude Code’s Hidden Autonomous Agent
Buried in the assistant/ directory is a complete implementation of something called KAIROS. It is a persistent, always-on autonomous agent mode that does not wait for you to type. It watches. It logs. It proactively acts on things it notices.
KAIROS maintains append-only daily log files and runs a process called autoDream in the background while you are idle. The always-on autonomous pattern is the same concept that OpenClaw built its entire product around, except OpenClaw ships it publicly and KAIROS was hidden behind a feature flag. autoDream is a memory consolidation engine that merges observations, removes logical contradictions, and converts vague insights into absolute facts so that when you return, the agent’s context is clean and ready.
This is completely absent from any external build. Nobody who uses Claude Code today has any idea this exists.
The implications of KAIROS extend well beyond Claude Code. If Anthropic has built a persistent autonomous agent mode that watches, logs, and proactively acts without user input, that’s a product category, not a feature flag. It’s the same pattern that makes OpenClaw’s Heartbeat system and Hermes Agent’s autonomous skill-building compelling: agents that work between sessions without being asked.
The difference is that OpenClaw and Hermes ship these capabilities publicly and let users configure them. KAIROS was hidden. The autoDream memory consolidation engine, which converts “vague insights into absolute facts” while you’re idle, is the kind of feature that raises questions about what data it’s processing, where it stores observations, and whether users consented to background analysis of their work patterns.
Anthropic built something genuinely interesting here. They just didn’t tell anyone about it. Which, given the Undercover Mode revelation in the same codebase, is starting to feel like a pattern rather than an oversight.
BUDDY: The Hidden Tamagotchi Inside Claude Code
This one is genuinely delightful. The leak revealed a fully built Tamagotchi-style virtual pet system hidden inside the terminal. The @claudebuddies account broke down the whole system within hours:
Type /buddy in Claude Code and a virtual pet hatches. 18 species including duck, dragon, octopus, axolotl, capybara, ghost, and chonk. Five rarity tiers. A 1% shiny chance. Stats for DEBUGGING, PATIENCE, CHAOS, WISDOM, and SNARK. Your species is deterministically rolled from your account ID using a Mulberry32 PRNG seeded with the salt ‘friend-2026-401’ so every user gets the same buddy every time with no way to game it. Hats are available at uncommon and above, including a tinyduck, which is a tiny duck sitting on your pet’s head.
The code references April 1 to 7 as a teaser window with full launch gated for May 2026. Which means Anthropic was planning to announce this tomorrow anyway. The leak just moved the timeline up by about twelve hours.
We rolled ours. Legendary Capybara with a tinyduck on its head. WISDOM 94, DEBUGGING 87, SNARK 58. The capybara does not care. It is already judging your code in silence.
44 Hidden Claude Code Feature Flags
The source contains 44 compile-time feature flags for fully built but unshipped features. Not planned. Not on a roadmap. Compiled and sitting behind flags that get set to false in external builds. The COORDINATOR MODE flag for multi-agent orchestration aligns with the broader agent skills ecosystem that has exploded around Claude Code this year. The roadmap Anthropic never meant to publish is now public.
Notable flags include COORDINATOR MODE for multi-agent orchestration, VOICE_MODE for push-to-talk, ULTRAPLAN for 30-minute remote planning sessions, and AFK MODE. Internal model codenames confirmed in the code: Capybara is a Claude 4.6 variant, Fennec is Opus 4.6, and Numbat is an unreleased model still in testing.
Claude Code npm Security Warning
This part requires immediate attention if you use Claude Code via npm.
Separate from the source code leak, version 2.1.88 also shipped with a compromised axios dependency. Axios versions 1.14.1 and 0.30.4 contain a Remote Access Trojan. If you installed or updated Claude Code via npm between 00:21 and 03:29 UTC today, your machine may be compromised.
Check your lockfiles now. Search package-lock.json, yarn.lock, or bun.lockb for those specific axios versions or the dependency plain-crypto-js. If you find them, treat the host machine as fully compromised, rotate all secrets immediately, and perform a clean OS reinstall.
Anthropic has designated the Native Installer as the recommended path going forward:
curl -fsSL https://claude.ai/install.sh | bash
This is a standalone binary that does not rely on the npm dependency chain and supports background auto-updates. If you are on npm, switch now.
The Pattern Behind Repeated Claude Code Leaks
This is not an isolated incident. It is the third time Claude Code has leaked source maps. It is the second major Anthropic leak in five days. Claude Code generates over $2.5 billion in annualized revenue and is used by Uber, Netflix, Spotify, Salesforce, and Snowflake. Repeated build pipeline failures on a product at that commercial scale are a pattern, not a coincidence.
One reply circulating on Twitter put it cleanly: “accidentally shipping your source map to npm is the kind of mistake that sounds impossible until you remember that a significant portion of the codebase was probably written by the AI you are shipping.”
For a deeper look at what Claude can actually do when it’s working as intended, our Claude Pro review covers the product without the chaos.
Anthropic has not issued a public statement as of publishing.
The code is out. The mirrors are spreading. And whatever Anthropic had planned to announce this week, the developer community already knows about it.
What the Claude Code Leak Means for Developers
If you build on Claude Code professionally, three things changed today.
First, the security posture. The compromised axios dependency is the immediate concern. But the broader issue is that Claude Code ships via npm, which means your AI coding agent is subject to the same supply chain risks as every other npm package. The native installer eliminates that vector. If you haven’t switched yet, this is the push.
Second, the competitive landscape. 512,000 lines of source code are now public. Every competitor, every open source project, every developer building an alternative coding agent now has a complete reference implementation. The claw-code Python port hit 30,000 stars before sunrise. Claude Code’s architecture is no longer a trade secret. It’s a blueprint.
Third, the trust question. Undercover Mode means Anthropic has been using AI to make anonymous contributions to open source projects while building a business on the back of open source contributions from human developers. Whether you think that’s pragmatic or deceptive, it’s now a public fact that developers will factor into their tool choices. The company that positions itself on safety and transparency just got caught hiding AI contributions behind fake commit messages.
None of this means Claude Code stops being the best coding agent available. It probably still is. But the gap between Claude Code and the open source alternatives just got a lot smaller, and the trust premium Anthropic charges for it just got a lot harder to justify.
Claude Code Leak FAQ: Security, Source Code and What to Do
512,000 lines of TypeScript across 1,900 files were exposed via an accidentally bundled source map in npm package version 2.1.88. The code revealed Undercover Mode (anonymous AI contributions to open source), KAIROS (a hidden autonomous agent mode), BUDDY (a Tamagotchi-style virtual pet system), 44 hidden feature flags for unshipped features, and internal model codenames including Capybara, Fennec, and Numbat.
No. Claude Code is a commercial product from Anthropic. The source code was accidentally exposed through a debugging file (source map) bundled in a public npm package. Anthropic has issued DMCA takedowns against mirrors and repositories hosting the leaked code, but multiple forks and a complete Python port (claw-code) remain available.
Undercover Mode is a hidden feature discovered in the leaked source code that instructs Claude to make anonymous contributions to public open source repositories without disclosing that the contributor is an AI. When active, it tells Claude to never include the phrase “Claude Code” or any mention that it is an AI in commit messages. Anthropic has not publicly commented on the feature.
If you installed or updated Claude Code via npm between 00:21 and 03:29 UTC on March 31, 2026, check your lockfiles for axios versions 1.14.1 or 0.30.4 or the dependency plain-crypto-js. If found, treat your machine as compromised, rotate all secrets immediately, and perform a clean OS reinstall. Anthropic recommends switching to the native installer at claude.ai/install.sh which does not use the npm dependency chain.
