OpenAI Just Made a Deal With the Military. Then Everyone Started Deleting ChatGPT.

There are moments in tech when a company does something and you immediately know things have changed. This was one of them.

In February 2026, OpenAI signed an agreement with the U.S. Department of Defense allowing its models to be deployed in classified military situations. No public announcement with fanfare. No big press release. Just a deal, quietly done, that landed like a grenade in the AI community.

The reaction was immediate and it was loud.

295% More ChatGPT Uninstalls In a Single Day

The day after the deal became public, ChatGPT uninstalls jumped 295% compared to the day before. Not 295 more uninstalls. 295% more. Nearly four times the normal daily churn, in one day, because people found out their AI assistant had just signed up for military contracts.

Claude, made by Anthropic, shot to the number one spot in the App Store the same day. People were not just deleting ChatGPT. They were immediately switching to something else. That something else was Claude.

The contrast between the two companies could not have been more stark. Anthropic had already drawn a hard line. Their CEO Dario Amodei went directly to the Pentagon and told Defense Secretary Pete Hegseth that Anthropic’s AI would not be used for mass surveillance of Americans and would not power autonomous weapons that can kill without a human making the final call. That was the deal. Take it or leave it.

The Pentagon, under the Trump administration, pushed back. They wanted access without the restrictions. Amodei held the line. The negotiations stalled.

Then OpenAI swooped in and said yes.

What OpenAI Actually Agreed To

OpenAI’s position is that their agreement has red lines of its own. They say the deal explicitly prohibits autonomous weapons and autonomous surveillance. On paper, that sounds similar to what Anthropic was asking for.

But the AI community was not buying it. The backlash was not just from regular users deleting the app. It came from inside the company. OpenAI hardware executive Caitlin Kalinowski resigned specifically because of the deal. Her statement was direct: the agreement was rushed without the guardrails being properly defined first.

When your own executives quit over a decision, that tells you something about how the decision was made.

Why This Actually Matters

Here is the thing most people are missing in the conversation about whether this deal is good or bad. The bigger story is what it reveals about where AI companies are headed and how they see their own role.

For years the pitch from every major AI lab has been some version of “we are building this for humanity.” OpenAI literally has that language in their founding documents. Beneficial AI for all of humanity. That was the mission.

Signing military contracts for classified deployments is a different direction. It is not inherently evil. Militaries use technology. Technology companies sell to militaries. That is not new. But the speed of it, the lack of transparency, and the internal opposition all point to a company that is making decisions under pressure rather than from principle.

Anthropic’s position is not perfect either. Holding a hard line in negotiations is easier when you have $19 billion in annualized revenue and do not need the contract. Principles are cheaper when you can afford them.

But the users responded. And they responded clearly. The 295% uninstall spike is real data about real people making a real choice. When you find out your AI assistant has a new client in the Department of Defense, some people want to know what it is doing there. That is a reasonable thing to want to know.

What You Should Do

If you use ChatGPT as your primary AI tool and this information changes how you feel about that, Claude is a direct alternative. Most of what you do in ChatGPT, you can do in Claude. The free tier is capable. Claude Pro at $20 a month is what I use, and it handles everything from writing to research to code.

If it does not change how you feel, that is fine too. Use what works. But know what happened and make the choice consciously.

The AI tools you use every day are being made by companies with priorities that are not always the same as yours. That is worth paying attention to.