TechCrunch reported on April 28 that Google has signed a Pentagon agreement granting access to its AI on classified networks for "all lawful uses": the same terms Anthropic refused in February and was punished for. Google joins OpenAI and xAI as the third major AI vendor to take a Pentagon contract Anthropic walked away from. Per the Wall Street Journal, Google's contract includes language saying its AI is not intended for domestic mass surveillance or autonomous weapons, but it is unclear whether that language is legally binding or enforceable. Some 950 Google employees have now signed an open letter asking the company to follow Anthropic's lead, up from 560 names 24 hours earlier, before the Google deal was reported.
Anthropic's path through this has been bruising and is worth understanding. After Defense Secretary Pete Hegseth summoned CEO Dario Amodei to the Pentagon and demanded the company drop its no-mass-surveillance and no-autonomous-weapons clauses, Anthropic refused. The Trump administration on February 27 ordered federal agencies and military contractors to halt business with Anthropic; the DoD then declared Anthropic a "supply-chain risk," a designation normally reserved for foreign adversaries. Anthropic sued in March. US District Judge Rita Lin granted a preliminary injunction in a 43-page ruling that called the designation "classic illegal First Amendment retaliation" and rejected what she called the "Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government." On April 8, an appeals court rejected Anthropic's broader appeal of the supply-chain designation, so Anthropic has the injunction but lost ground at the appellate level.
What's left is a three-tier landscape. Anthropic refused the "any lawful use" clause and is the only major lab whose red lines have actually been tested against Pentagon enforcement. OpenAI signed with explicit contract red lines (no mass surveillance, no autonomous weapons), and MIT Technology Review's read of that deal was "this is what Anthropic feared": the red lines are contractual, but the deal happened. xAI accepted what Defense reporting describes as "few restrictions on military use of AI." Google's contract sits closer to OpenAI's: language about non-surveillance and non-autonomous-weapons, but per the WSJ, the binding force of that language is unclear. The pattern: every major frontier AI vendor now has a Pentagon relationship; the difference is how durable the red lines are when the Defense Secretary personally demands they be dropped.
For builders, this matters in three concrete ways. First, the Lin ruling, which held that branding a private AI company a "supply-chain risk" for refusing contract terms is "First Amendment retaliation," is now federal-court precedent, even after the April 8 appeals-court setback on the broader designation. That language will get cited in future fights over how the U.S. government treats vendors who refuse contract terms. Second, model choice now has a political dimension that is not abstract: if you build on a frontier lab's API, your usage is fungible with Pentagon use of that lab's models, which matters or doesn't matter depending on your stack and your customers. Third, the open-letter dynamic at Google (560 to 950 names in 24 hours) is a signal that internal pressure is rising, not falling; expect more public refusals from individual researchers even as their companies sign. Anthropic is the only major lab where the public refusal is the company's, not just the workers'.
