Best for: Anyone following the AI model race who wants the full story on Anthropic’s most powerful model, why they’re afraid of it, and what the leak means in the context of the Pentagon dispute and IPO timeline.
Not ideal for: Readers looking for hands-on usage details or benchmarks. Mythos isn’t publicly available yet; this is the backstory of how the world found out about it.
Anthropic left nearly 3,000 internal documents sitting in a public database. Someone found them. Inside: their most powerful model yet, a new tier above Opus, and a warning that even they think it might be too dangerous to release.
Nobody at Anthropic planned to announce this.
On Thursday night, security researchers discovered that Anthropic had accidentally left a trove of unpublished internal documents sitting in a publicly accessible data store, unlocked, searchable, and available to anyone who looked. Fortune’s Bea Nolan found them first. Inside was a draft blog post describing a model Anthropic calls Claude Mythos. The company confirmed it’s real.
The short version: it’s the most powerful AI model Anthropic has ever built, it sits in an entirely new tier above Opus, and Anthropic themselves wrote in the leaked documents that it poses “unprecedented cybersecurity risks.”
They weren’t planning to tell you about it yet. Now they don’t have a choice.
How the Claude Mythos Leak Happened
The exposure came down to a basic CMS misconfiguration. Anthropic’s content management system defaults to making uploaded files publicly accessible unless someone manually changes that setting. Nobody changed it. Nearly 3,000 unpublished assets ended up sitting in a publicly searchable data store, including draft blog posts, images, PDFs, and internal documents that were never meant to see daylight.
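The failure mode described above, assets defaulting to public unless someone flips a setting, is a classic insecure-default pattern. Here is a minimal sketch of how it plays out (entirely hypothetical; none of these names or structures reflect Anthropic’s actual CMS):

```python
# Hypothetical illustration of an insecure-default CMS setting.
# All names here are made up for the example.
from dataclasses import dataclass


@dataclass
class Asset:
    name: str
    # The dangerous part: visibility defaults to "public" at upload time.
    visibility: str = "public"


def publicly_reachable(assets):
    """Return the names of assets anyone on the internet could fetch."""
    return [a.name for a in assets if a.visibility == "public"]


# An editor uploads drafts and forgets to flip the setting on most of them.
uploads = [
    Asset("mythos-announcement-draft.md"),          # default: public
    Asset("capybara-tier-pricing.pdf"),             # default: public
    Asset("board-memo.pdf", visibility="private"),  # someone remembered
]
print(publicly_reachable(uploads))
# → ['mythos-announcement-draft.md', 'capybara-tier-pricing.pdf']
```

The fix is equally simple in principle: make `visibility` default to `"private"` so that exposure requires a deliberate choice rather than an omission.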
Two cybersecurity researchers verified the find. Roy Paz of LayerX Security and Alexandre Pauwels of the University of Cambridge independently assessed the documents after Fortune flagged them. The story spread fast. Within hours it was one of the most discussed AI stories on X, with researchers, investors, and AI accounts breaking down what the documents meant.
Anthropic restricted access once contacted, attributed the incident to “human error,” and confirmed to Fortune that yes, Claude Mythos is real and they’re already testing it with a small group of early access customers.
The irony of a company whose entire identity is built around AI safety accidentally leaving its most sensitive internal documents wide open to the public is not lost on anyone. Five days later, the Claude Code source code leaked via npm, making this the second major Anthropic security failure in a single week.
What Is Claude Mythos?
Right now Anthropic’s model lineup goes Haiku, Sonnet, Opus. Haiku is fast and cheap. Sonnet is the everyday workhorse. Opus is the flagship, the most capable thing they publicly offer. We did a full review of Claude Pro covering what the current lineup actually delivers for regular people.
Mythos doesn’t fit in any of those tiers. It’s above all of them.
The leaked draft describes a new tier called Capybara, which appears to be the internal product name for the tier, with Mythos as the specific model within it. Both names appear in two versions of the same draft blog post, the only difference being which name was swapped throughout. Anthropic told Fortune the documents were “early drafts of content being considered for publication,” which suggests they were still deciding between names. Haiku, Sonnet, Opus, and Mythos all follow Anthropic’s established naming logic, drawing on the weight and depth of written forms. Capybara does not. Mythos is almost certainly the name that ships.
The naming is worth pausing on because it reveals Anthropic’s internal product architecture. The Claude Code source code leak five days later confirmed that Capybara is used internally as a Claude 4.6 variant codename, while Fennec maps to Opus 4.6 and Numbat is an unreleased model still in testing. Mythos sits above all of these.
If Anthropic follows their existing pattern, Mythos would eventually slot into the API alongside Haiku, Sonnet, and Opus as a fourth tier, presumably at a significantly higher price point. The leaked documents describe it as “very expensive for us to serve,” which suggests per-token pricing several multiples above Opus. For reference, Opus 4.6 currently runs about $15 per million output tokens. Mythos could be $50 or higher, putting it firmly in enterprise-only territory even if it were publicly available.
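To make that pricing gap concrete, here is a quick back-of-envelope calculation. The $15-per-million figure for Opus 4.6 is from above; the $50 figure is speculation, not a confirmed price, and the workload size is an arbitrary example:

```python
# Back-of-envelope output-token cost comparison.
# $15/M for Opus 4.6 is cited above; $50/M for Mythos is speculative.
def output_cost(tokens: int, usd_per_million: float) -> float:
    """Cost in USD for a given number of output tokens."""
    return tokens / 1_000_000 * usd_per_million


MONTHLY_OUTPUT_TOKENS = 10_000_000  # hypothetical heavy-usage workload

opus = output_cost(MONTHLY_OUTPUT_TOKENS, 15.0)    # $150/month
mythos = output_cost(MONTHLY_OUTPUT_TOKENS, 50.0)  # $500/month
print(f"Opus 4.6: ${opus:.0f}/mo   Mythos (speculative): ${mythos:.0f}/mo")
```

At that spread, the same workload costs more than three times as much, which is why a $50-plus price point would push Mythos into enterprise-only territory.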
What this model can do, according to the drafts: dramatically higher scores than Claude Opus 4.6 on software coding, academic reasoning, and cybersecurity. No specific benchmark numbers were included in the leaked material, but the language in the documents is not subtle. Anthropic’s spokesperson confirmed to Fortune it’s a “step change” and “the most capable we’ve built to date.”
Claude Mythos Cybersecurity Warning: Why Anthropic Is Scared
Here’s where it gets genuinely uncomfortable.
The leaked documents don’t just announce a powerful new model. They warn about it. Anthropic wrote that Mythos is “currently far ahead of any other AI model in cyber capabilities” and that it “presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.”
That means the model can find and exploit software vulnerabilities faster than security teams can patch them. Not slightly faster. Far ahead. Those are Anthropic’s own words about their own model.
This connects to Anthropic’s internal safety framework, which assigns models AI Safety Levels from ASL-1 through ASL-4. ASL-4, which Anthropic hasn’t formally defined yet, is expected to apply when an AI model becomes “the primary source of national security risk in a major area such as cyberattacks or biological weapons.” Based on the language in the leaked drafts, Mythos may be approaching that line.
Because of all this, Anthropic is doing something unusual: a deliberately slow, security-first rollout. Early access is going to cyber defense organizations only, giving them time to harden their systems before the model is more widely available. There’s no general release date. The documents also note the model is “very expensive for us to serve, and will be very expensive for our customers to use,” and that Anthropic is working to make it more efficient before any broader launch.
Claude Mythos and the Pentagon Fight
The timing here is something.
Just one day before the Mythos leak broke, a federal judge issued a preliminary injunction blocking the Pentagon from labeling Anthropic a “supply chain risk,” calling the government’s treatment of the company “Orwellian” and “classic First Amendment retaliation.” Anthropic won that round.
Then their own CMS leaked their scariest model to the entire internet.
Emil Michael, the Under Secretary of Defense and Anthropic’s chief antagonist in the Pentagon dispute, posted on X almost immediately, writing “Umm…hello? Is it not clear yet that we have a problem here?” Michael has spent weeks publicly calling Dario Amodei a “liar” with a “god complex” who wants to “personally control the US military.” He’s now using the leak as evidence that Anthropic can’t be trusted with sensitive AI development. It’s not a good faith argument. The Pentagon’s fight with Anthropic has always been about wanting more access to Claude, not less. But the leak handed him fresh ammunition and he’s not putting it down.
Claude Mythos vs OpenAI Spud: The Race
Mythos doesn’t exist in a vacuum. According to The Information, OpenAI finished pretraining its own next-generation model, internally codenamed Spud, as of March 25. OpenAI CEO Sam Altman reportedly described it internally as a model that could “really accelerate the economy.” He also credited killing the Sora video app with freeing up the compute needed to push Spud forward.
Two frontier AI labs, both preparing for IPOs later in 2026, both sitting on their most powerful models ever, racing to see who blinks first on release timing. That’s the actual race happening right now.
For Anthropic, Mythos arriving accidentally and publicly in the same week as the Pentagon ruling, while an IPO is reportedly being discussed for Q4 2026, is either catastrophically bad timing or, as Gizmodo put it, a pretty good ad for Anthropic. Probably both.
Claude Mythos and Project Glasswing: What Happened Next
UPDATE (April 7, 2026): Anthropic confirmed that Claude Mythos will NOT be publicly released. Not delayed. Not waitlisted. Not released.
Instead, the model is being deployed exclusively through something called Project Glasswing, a defensive cybersecurity partnership with over 40 companies including Microsoft, Amazon, Apple, Google, NVIDIA, CrowdStrike, and Palo Alto Networks. The model is restricted to defensive security purposes only. No general API access. No consumer product. No developer preview.
The reason is the same cybersecurity capability that made the leaked documents so alarming. Mythos has already discovered thousands of previously unknown zero-day vulnerabilities across major systems. Anthropic’s position is that a model capable of finding vulnerabilities that fast could also be used to exploit them, and the risk of a public release outweighs the commercial upside.
This is the first time a major AI lab has built a frontier model and decided it’s too dangerous to sell. Whether you read that as responsible safety leadership or extremely effective marketing for how powerful the model is depends on your level of cynicism. Probably a bit of both.
The practical implication for developers and regular users: Mythos is not coming to Claude Pro, Claude Max, or the API. The model you use today (Sonnet 4.6 or Opus 4.6) remains the best publicly available Claude. Project Glasswing is a separate track that exists entirely in the enterprise cybersecurity world.
For Anthropic, this creates an interesting positioning problem. They’ve now confirmed they have the most powerful AI model in cybersecurity and simultaneously told the world they can’t have it. That’s either the most disciplined safety decision in AI history or the most expensive PR campaign ever run. Time will tell which.
What Claude Mythos Means for Regular People
Honestly, Mythos is not something most people will touch anytime soon. It’s expensive to run, invite-only, and aimed at cybersecurity professionals first. If you’re using Claude Pro today for writing, research, or coding, nothing changes. The tools built around Claude, like OpenClaw for autonomous task execution and Claude Cowork for desktop automation, will continue running on Sonnet and Opus regardless of what happens with Mythos.
What does change is the picture of where AI is going. When the company building the model feels the need to warn the world about it before it’s even announced, that’s worth paying attention to. Anthropic has always positioned itself as the safety-focused lab that moves carefully. The Mythos documents suggest they’re still doing that. They also suggest the models being built right now are operating in territory that even their creators find genuinely unsettling.
The model exists. Training is done. It’s already in the hands of a small group of testers.
It just wasn’t supposed to be in yours yet.
Claude Mythos FAQ: Release, Safety and Project Glasswing
What is Claude Mythos?

Claude Mythos is Anthropic’s most powerful AI model, sitting in an entirely new tier above Opus. It was revealed through an accidental leak of nearly 3,000 internal documents from a misconfigured CMS on March 26, 2026. Anthropic confirmed the model is real and described it as a step change in capability with dramatically higher scores in coding, academic reasoning, and cybersecurity.

Will Claude Mythos be publicly released?

No. Anthropic confirmed on April 7, 2026 that Claude Mythos will not be publicly released. It is being deployed exclusively through Project Glasswing, a defensive cybersecurity partnership with over 40 companies including Microsoft, Amazon, Apple, Google, and NVIDIA. There is no general API access, no consumer product, and no developer preview planned.

Why is Claude Mythos considered dangerous?

Anthropic’s own leaked documents describe Mythos as currently far ahead of any other AI model in cyber capabilities, warning that it presages models that can exploit vulnerabilities faster than defenders can patch them. The model has already discovered thousands of previously unknown zero-day vulnerabilities. Anthropic restricted it to defensive cybersecurity use only through Project Glasswing because of the risk that public access could enable offensive exploitation.

What is Project Glasswing?

Project Glasswing is Anthropic’s limited partnership program for deploying Claude Mythos exclusively for defensive cybersecurity purposes. It includes over 40 partner companies such as Microsoft, Amazon, Apple, Google, NVIDIA, CrowdStrike, and Palo Alto Networks. The model is restricted to finding and fixing vulnerabilities, not available for general use or public API access.
