A federal appeals court in DC refused to halt the Trump administration's blacklist of Anthropic yesterday, denying the AI company's emergency motion for a stay. The three-judge panel, all Republican appointees including Trump picks Gregory Katsas and Neomi Rao, acknowledged Anthropic would "likely suffer some degree of irreparable harm" but ruled the company hadn't shown its speech was being chilled. The court expedited the case for May 19 oral arguments, leaving Anthropic without immediate relief from the Pentagon's "supply chain risk" designation that bars military contractors from using Claude.
The split court rulings expose how judge shopping and political appointments now shape AI policy enforcement. While Trump-appointed judges in DC sided with the administration, Biden appointee Rita Lin in California's Northern District already granted Anthropic a preliminary injunction in March, calling the blacklist "Orwellian" retaliation for protected speech. As I covered then, Lin found the Pentagon's ultimatum (remove Claude's guardrails against autonomous weapons or face consequences) violated the First Amendment when Anthropic refused.
What makes this fascinating is the international angle other sources reveal: the UK government is actively courting Anthropic with proposals for a London Stock Exchange listing and office expansion, specifically because of the company's principled stance against weaponizing AI. Prime Minister Starmer's office backs the effort, seeing Anthropic's refusal to build autonomous weapons as exactly the kind of AI company Britain wants. It's a stark contrast: Washington punishes Anthropic for having ethical guardrails while London rewards it for the same thing.
For developers, this split creates real uncertainty. Anthropic's federal contracts remain frozen pending appeal, though the California injunction provides some protection. The broader message is clear: your AI ethics stance now carries geopolitical consequences, and where you incorporate may matter as much as what you build.
