A California judge temporarily blocked the Pentagon from designating Anthropic as a supply chain risk last Thursday, calling out the Defense Department for prioritizing social media drama over proper legal process. Judge Rita Lin's 43-page opinion dismantled the government's case, finding that Defense Secretary Pete Hegseth failed to follow required procedures after Trump's Truth Social post directed all federal agencies to stop using Anthropic's Claude AI. The government admitted in court that it had no evidence for its claimed "kill switch" concerns.
This mess started as a straightforward contract dispute. The government used Claude through Palantir for most of 2025 without issue, operating under terms that prohibited mass surveillance and lethal autonomous weapons. Problems arose only when the Pentagon tried to contract directly with Anthropic. Instead of working through existing dispute processes, officials chose public punishment, a move that backfired spectacularly when they couldn't back up their legal claims.
The case exposes how quickly AI policy can turn into performative politics. The government's own court filings contradict the inflammatory social media posts from Trump and Hegseth, revealing this was less about legitimate security concerns and more about making an example of a company that wouldn't bend. Lin's opinion suggests the entire supply chain designation was legally unsupported: letters to Congress asserted, without supporting detail, that other remedies were impossible, and key procedural steps were simply skipped.
For AI companies, the episode is a warning sign about government retaliation over contract disputes. But it also shows that courts won't rubber-stamp poorly executed policy theater. The case remains unresolved with appeals pending, but Anthropic's legal victory demonstrates that even in heated political moments, due process still matters, at least when you can afford good lawyers.
