The Pentagon Tried to Destroy Anthropic for Having Principles. A Judge Just Said No.

A federal court just called the government’s attack on Anthropic “Orwellian.” Here’s the whole story.

Anthropic said it wouldn’t let its AI be used for autonomous weapons or to spy on American citizens.

The Pentagon responded by trying to put them out of business.

A federal judge just said that’s not how this works.

On Thursday, U.S. District Judge Rita Lin issued a preliminary injunction blocking the Pentagon from labeling Anthropic a “supply chain risk to national security.” The designation, which Defense Secretary Pete Hegseth applied last month, had previously been reserved for companies tied to foreign adversaries. Think Chinese military-linked firms, not American AI startups. Judge Lin’s 43-page ruling was not gentle about it. She called the move arbitrary, capricious, and almost certainly unconstitutional. Then she used the word “Orwellian.”

That’s a federal judge. In a written ruling. Describing the U.S. government’s treatment of an American AI company.

Here’s How It Started

Anthropic won a contract to provide the Pentagon with access to Claude. Then the Department of Defense asked Anthropic to modify the terms to allow “all lawful uses” of its technology, including, presumably, uses Anthropic had explicitly said it wouldn’t support.

Anthropic had two hard limits: no autonomous weapons systems, no mass domestic surveillance. It wasn’t hiding the ball. These were publicly stated positions the Pentagon had already reviewed and signed off on when vetting the company for the contract. Anthropic argued those limits aren’t arbitrary restrictions. They’re the company’s stated values, and expressing them is protected speech.

The Pentagon didn’t see it that way. Hegseth labeled the company a supply chain risk. Trump followed up by ordering all federal agencies to stop using Claude. And then things escalated further: the designation meant that any company doing business with the military also had to cut ties with Anthropic. Not just the government. Every contractor, too. That’s not a policy disagreement. That’s a blacklist.

What the Judge Said

Judge Lin didn’t mince words. In her ruling, she noted that the Pentagon’s own internal records showed the supply chain risk designation wasn’t based on national security concerns. It was based on Anthropic’s “hostile manner through the press.” That’s the document trail. They labeled an American company a potential foreign-adversary-level threat because the CEO talked to journalists.

She wrote that the measures were “designed to punish Anthropic” and called the whole thing “classic First Amendment retaliation.” She also pointed out the obvious: if the government didn’t want to use Claude, it could just stop using Claude. That’s completely legal. What it can’t do is weaponize a national security designation to destroy a company’s ability to operate commercially, across the entire defense contractor ecosystem, because it didn’t like what the company said in public.

The injunction takes effect in seven days, giving the government a window to appeal. A separate but related case is still moving through federal court in Washington, D.C. Microsoft, the ACLU, and a group of retired military leaders all filed briefs supporting Anthropic’s position.

Why This Matters Beyond the Headlines

This isn’t just an Anthropic story. It’s a precedent story.

Every AI company operating in this environment is watching what happens when you draw an ethical line and refuse to move it. The government’s position, stripped down, was this: we bought access to your tool, so we get to decide how it’s used, including for things you’ve explicitly said you won’t support. Anthropic’s position: that’s not how values work.

The judge sided with the company. For now.

What it signals to the rest of the industry is worth paying attention to. AI safety isn’t just a PR strategy or a regulatory checkbox. When it’s built into the model itself, when the company is actually willing to lose a contract over it, it becomes a legal position. One a federal judge found credible enough to protect. We reviewed Claude Pro recently, and that safety-first philosophy shows up in how the product actually works, not just in how Anthropic talks about it.

Days after this ruling, Anthropic’s security made headlines for a different reason: internal documents leaked, revealing Claude Mythos, the company’s most powerful model yet, with cybersecurity capabilities that even Anthropic described as unprecedented. The Pentagon’s Emil Michael seized on the leak immediately as evidence that Anthropic can’t be trusted.

Anthropic said it’s “grateful to the court for moving swiftly” and that its focus remains on working productively with the government. Diplomatic, given the circumstances.

The Pentagon hasn’t commented. Hegseth is also dealing with a separate ruling in which a judge found he violated the First Amendment rights of journalists by restricting press access. It’s been a rough month for that particular inbox.

The case continues. But for now, Anthropic drew a line, held it, and a federal court backed it up. Meanwhile, OpenAI is busy killing products and pivoting strategy every quarter. Anthropic is busy fighting the Pentagon in court for having too many principles. Different energy.

That’s not something you see every day.