A federal judge granted Anthropic a preliminary injunction blocking the Pentagon's supplier blacklist, finding the Defense Department likely violated the First Amendment when it banned the AI company for its "hostile manner through the press." Judge Rita Lin ruled that punishing Anthropic for publicly criticizing government contracting positions constitutes "classic illegal First Amendment retaliation," with the injunction taking effect in seven days.
This case crystallizes a fundamental tension in AI governance: who gets to set the boundaries on military AI use? Defense Secretary Pete Hegseth's January memo demanded "any lawful use" language in AI contracts within 180 days, essentially requiring vendors to give military commanders full discretion over AI deployment. Anthropic refused, maintaining that Claude shouldn't be used for autonomous weapons or domestic surveillance. The Pentagon's response, blacklisting the company as a "supply chain risk," reveals how quickly a contract dispute can escalate into a constitutional one.
The judge's comments during hearings show she grasps the core dilemma: "Anthropic is saying that its AI product, Claude, is not safe to use for autonomous lethal weapons and domestic mass surveillance... the Department of War is saying that military commanders have to decide what is safe." While she acknowledged it's not her role to resolve this policy debate, her injunction suggests the government overstepped when it moved from choosing different vendors to retaliating against public criticism.
For AI builders, this sets an important precedent. Companies can apparently draw ethical lines around their technology without facing government retaliation for speaking publicly about those decisions. But the underlying question remains: as AI capabilities expand, will vendors be able to maintain use restrictions, or will government contracts increasingly demand unrestricted access? The case's final outcome could reshape how AI companies navigate the tension between commercial opportunity and ethical boundaries.
