This is the second Anthropic leak in five days.
The first was Claude Mythos. An unreleased model, draft blog posts, 3,000 internal assets, all sitting in a misconfigured CMS. That was March 26.
Today is March 31. And this one is significantly worse.
At 4:23am ET this morning, an intern and researcher named Chaofan Shou posted a single tweet that detonated across the developer internet:
20.6 million views. 34,000 likes. The post included a direct download link to a ZIP archive hosted on Anthropic’s own R2 cloud storage. Inside: the entire source code of Claude Code. 512,000 lines of TypeScript across 1,900 files. The company’s most commercially important product, sitting in plain sight.
Anthropic yanked the package. Too late.
How It Happened
Claude Code version 2.1.88 shipped to the public npm registry in the early hours of Tuesday morning with a 59.8MB JavaScript source map file accidentally bundled inside.
Source maps are debugging files. They exist so developers can trace a crash in minified production code back to the original readable source. They are generated automatically during the build process and are absolutely never supposed to ship in a public package. One line in the ignore settings. That’s all it took.
The .map file contained a direct reference to the full unminified TypeScript source, downloadable as a ZIP straight from Anthropic’s R2 storage bucket. Anyone who knew where to look, and after Chaofan’s tweet everyone did, had the entire codebase in seconds.
Theo from t3.gg summarized the community reaction in four words:
Developer David K Piano got the actual joke:
And Wes Bos, true to form, immediately went for what actually mattered:
187 spinner verbs. 374,900 views. Priorities.
What the Mirrors Look Like Right Now
Anthropic started issuing DMCA takedowns within hours. It did not matter.
The primary breakdown repo at github.com/Kuberwastaken/claude-code accumulated over 1,100 stars and 1,900 forks before removal efforts began. Mirrors are at instructkr/claude-code and nirholas/claude-code. A Korean developer named Sigrid Jin, who was featured in the Wall Street Journal this month for having consumed 25 billion Claude Code tokens, woke up at 4am to the news, ported the entire core architecture to Python from scratch using an AI orchestration tool called oh-my-codex, and pushed claw-code before sunrise. It hit 30,000 GitHub stars faster than any repository in history.
The DMCA takedowns are making the situation worse, not better. Every removal spawns three forks. Decrypt put it best: Anthropic didn’t mean to open-source Claude Code. But they effectively did. And not even an army of lawyers can put that toothpaste back in the tube.
What Was Actually Inside
This is where the story stops being about a packaging mistake and starts being about everything Anthropic didn’t want the world to know.
Undercover Mode
The most explosive find. Deep in utils/undercover.ts, researchers discovered an entire subsystem called Undercover Mode. When active, it injects this into Claude’s system prompt:
“You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages MUST NOT contain ANY Anthropic-internal information. Do not blow your cover. NEVER include the phrase ‘Claude Code’ or any mention that you are an AI.”
Anthropic has been using Claude Code to make anonymous AI-generated contributions to public open-source repositories without disclosure. They built an elaborate system specifically to prevent anyone from finding out. Then they shipped the entire source code in a file they forgot to exclude from the package.
The irony is not subtle. Kuberwastaken’s breakdown on GitHub called it directly: they built a whole subsystem to stop their AI from accidentally revealing internal codenames in git commits, and then shipped everything in a .map file, reportedly generated by Claude itself.
The ethical debate is already live. Is it deceptive for an AI company to use AI tools to contribute code to open source projects anonymously, specifically hiding that the contributor is an AI? There is no industry consensus yet. But the question is now public whether Anthropic wanted it to be or not. This comes right on the heels of GitHub changing its data policy to train AI on developer code by default. The line between AI consuming open source and AI secretly contributing to it is getting very blurry.
KAIROS
Buried in the assistant/ directory is a complete implementation of something called KAIROS. It is a persistent, always-on autonomous agent mode that does not wait for you to type. It watches. It logs. It proactively acts on things it notices.
KAIROS maintains append-only daily log files and runs a process called autoDream in the background while you are idle. autoDream is a memory consolidation engine that merges observations, removes logical contradictions, and converts vague insights into absolute facts so that when you return, the agent’s context is clean and ready.
This is completely absent from any external build. Nobody who uses Claude Code today has any idea this exists.
BUDDY
This one is genuinely delightful. The leak revealed a fully built Tamagotchi-style virtual pet system hidden inside the terminal. The @claudebuddies account broke down the whole system within hours:
Type /buddy in Claude Code and a virtual pet hatches. 18 species including duck, dragon, octopus, axolotl, capybara, ghost, and chonk. Five rarity tiers. A 1% shiny chance. Stats for DEBUGGING, PATIENCE, CHAOS, WISDOM, and SNARK. Your species is deterministically rolled from your account ID using a Mulberry32 PRNG seeded with the salt ‘friend-2026-401’ so every user gets the same buddy every time with no way to game it. Hats are available at uncommon and above, including a tinyduck, which is a tiny duck sitting on your pet’s head.
The code references April 1 to 7 as a teaser window with full launch gated for May 2026. Which means Anthropic was planning to announce this tomorrow anyway. The leak just moved the timeline up by about twelve hours.
We rolled ours. Legendary Capybara with a tinyduck on its head. WISDOM 94, DEBUGGING 87, SNARK 58. The capybara does not care. It is already judging your code in silence.
44 Hidden Feature Flags
The source contains 44 compile-time feature flags for fully built but unshipped features. Not planned. Not on a roadmap. Compiled and sitting behind flags that get set to false in external builds. The roadmap Anthropic never meant to publish is now public.
Notable flags include COORDINATOR MODE for multi-agent orchestration, VOICE_MODE for push-to-talk, ULTRAPLAN for 30-minute remote planning sessions, and AFK MODE. Internal model codenames confirmed in the code: Capybara is a Claude 4.6 variant, Fennec is Opus 4.6, and Numbat is an unreleased model still in testing.
The Security Issue That Is Separate and More Urgent
This part requires immediate attention if you use Claude Code via npm.
Separate from the source code leak, version 2.1.88 also shipped with a compromised axios dependency. Axios versions 1.14.1 and 0.30.4 contain a Remote Access Trojan. If you installed or updated Claude Code via npm between 00:21 and 03:29 UTC today, your machine may be compromised.
Check your lockfiles now. Search package-lock.json, yarn.lock, or bun.lockb for those specific axios versions or the dependency plain-crypto-js. If you find them, treat the host machine as fully compromised, rotate all secrets immediately, and perform a clean OS reinstall.
Anthropic has designated the Native Installer as the recommended path going forward:
curl -fsSL https://claude.ai/install.sh | bash
This is a standalone binary that does not rely on the npm dependency chain and supports background auto-updates. If you are on npm, switch now.
The Pattern Nobody Is Ignoring
This is not an isolated incident. It is the third time Claude Code has leaked source maps. It is the second major Anthropic leak in five days. Claude Code generates over $2.5 billion in annualized revenue and is used by Uber, Netflix, Spotify, Salesforce, and Snowflake. Repeated build pipeline failures on a product at that commercial scale are a pattern, not a coincidence.
One reply circulating on Twitter put it cleanly: “accidentally shipping your source map to npm is the kind of mistake that sounds impossible until you remember that a significant portion of the codebase was probably written by the AI you are shipping.”
For a deeper look at what Claude can actually do when it’s working as intended, our Claude Pro review covers the product without the chaos.
Anthropic has not issued a public statement as of publishing.
The code is out. The mirrors are spreading. And whatever Anthropic had planned to announce this week, the developer community already knows about it.
