This morning, Anthropic — the company behind the Claude AI chatbot — sued the Trump administration in two federal courts. The target: a Pentagon decision that labeled the company a “supply chain risk to national security.” It’s a designation that had never before been applied to an American company. Until now.

Two Red Lines, One Blacklist

The conflict came down to two specific limits Anthropic refused to remove: Claude could not be used for mass domestic surveillance of Americans, and it could not be used for fully autonomous weapons systems. CEO Dario Amodei argued publicly that current AI models aren’t reliable enough for fully autonomous lethal decisions — making that use case genuinely dangerous, not just ethically uncomfortable.

The Pentagon’s counter was blunt: the military needed Claude for “all lawful purposes,” and a private company shouldn’t get to veto that. Amodei met personally with Defense Secretary Pete Hegseth on February 24. They couldn’t close the gap.

When talks collapsed, the supply chain risk designation landed fast — cutting Anthropic off from its $200 million DOD contract and barring defense contractors from using Claude in government work. Trump also ordered all federal agencies to stop using Claude, though the Pentagon got a six-month window to phase it out, largely because Claude is too embedded in classified systems to cut off overnight.

The Lawsuit — and the Fallout

Anthropic’s filings, submitted in California and the DC federal appeals court, center on a First Amendment argument: the government can choose not to work with a company, but it cannot punish one for stating its views. The legal road is steep — courts have historically deferred to the executive branch on national security — but the implications stretch far beyond Anthropic. If this designation holds, it shifts the leverage every AI company has when negotiating limits on how its technology gets used.

OpenAI moved quickly to fill the military gap, announcing a Pentagon deal within hours of the blacklisting. On the consumer side, the public responded differently: more than a million people signed up for Claude daily this week, pushing it past ChatGPT and Gemini as the top AI app in over 20 countries in Apple’s App Store. Apparently drawing a line on weapons technology is good for brand loyalty.

Who Decides What AI Can Do

Amazon, Microsoft, and Google have each confirmed Claude remains available through their platforms for non-defense work — close enough to keep the business, far enough to avoid the legal blast radius. The question of who gets to set limits on AI just landed in federal court. The answer will set the terms for everyone.