A federal judge delivered a sharp rebuke to the Trump administration Thursday, blocking the Pentagon's "supply chain risk" designation against Anthropic and halting President Trump's order for all federal agencies to stop using the company's Claude AI model. U.S. District Judge Rita Lin called the government's actions "Orwellian" and said they could "cripple" the company, ruling that Anthropic had shown the measures were "likely unlawful" and causing "irreparable harm."
This ruling represents a significant victory for AI companies pushing back against government overreach, and it highlights the growing tension between national security demands and Silicon Valley's guardrails. The dispute centers on Anthropic's refusal to let the military use Claude for domestic surveillance or fully autonomous weapons, while the Pentagon insists it needs AI tools for "all lawful purposes." As I've covered previously, this case has become a bellwether for how the Trump administration will handle AI companies that won't bend to military demands.
Judge Lin appeared particularly skeptical during Tuesday's hearing, noting the government's actions "don't really seem to be tailored to the stated national security concern." She pointed out that if the Pentagon were worried about operational integrity, it could simply stop using Claude rather than attempting to blacklist Anthropic across all government contracts. The judge's language was unusually pointed: she called the moves "troubling" and questioned whether they were truly about security or punishment for protected speech.
For developers and AI companies, the ruling sets an important precedent for resisting government pressure to weaken safety guardrails. Anthropic's willingness to fight, and win, in federal court shows that principled stands on AI ethics can survive legal challenges, even against national security arguments.
