U.S. District Judge Rita Lin granted Anthropic a preliminary injunction Thursday, blocking the Pentagon from enforcing its "supply chain risk" designation against the AI company. The ruling halts Trump administration efforts to cut off government use of Claude across federal agencies, with Lin writing that the Defense Department "provides no legitimate basis to infer from Anthropic's forthright insistence on usage restrictions that it might become a saboteur."
The ruling marks a significant clash over AI governance in the federal government. The Pentagon had been using Claude to draft sensitive documents and analyze classified data for two years before abruptly pulling the plug this month. The administration cited Anthropic's usage restrictions as evidence the company "could not be trusted" — a remarkable position given that responsible AI practices typically require such guardrails. The designation effectively blacklisted Anthropic from government contracts and damaged its reputation with enterprise customers.
The ruling exposes deeper tensions over AI deployment in sensitive environments. While Anthropic can claim victory, the injunction only restores the status quo as of February 27 and doesn't prevent agencies from switching providers through normal procurement processes. The Pentagon can still cancel Claude contracts; it just can't cite the supply chain risk label as justification. A separate lawsuit, brought under different legal theories, remains pending in the D.C. appeals court.
For developers and AI teams working with government clients, this creates continued uncertainty. The one-week delay before Lin's order takes effect means Claude remains restricted in the near term. More broadly, the case signals that AI companies may face political pressure to remove safety guardrails when working with government agencies — a troubling precedent as AI becomes more integrated into critical infrastructure.
