Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei an ultimatum in February: strip Claude's guardrails against autonomous weapons and mass surveillance, or face consequences. When Amodei refused, citing concerns about undermining "democratic values," the US government moved fast—Trump ordered federal agencies to stop using Anthropic's technology, the Pentagon labeled it a supply chain risk alongside companies like Huawei, and a $200 million defense contract vanished.

Britain's Department for Science, Innovation and Technology saw something different in Anthropic's principled stance. UK officials are now courting the $380 billion company with proposals for a dual London Stock Exchange listing and expanded UK operations, backed by Prime Minister Keir Starmer's office. With 200 employees already in Britain and former PM Rishi Sunak as a senior adviser, Anthropic has existing infrastructure to build on.

The legal fight continues: US District Judge Rita Lin granted an injunction blocking the Pentagon's blacklist, calling the government's actions "troubling" and likely illegal, and the Ninth Circuit appeal remains pending. The judicial pushback bolsters Anthropic's legal position and gives international governments cover to court the company.

For AI builders, the stakes are practical. If ethics-first companies can find refuge in friendlier jurisdictions, it changes the calculus around building responsible AI systems. The UK is betting that in a multipolar AI world, being the jurisdiction that welcomes principled companies, rather than demanding military compliance, becomes a competitive advantage for attracting top AI talent and investment.