OpenAI's defense partnership announcement triggered a user exodus, with the "QuitGPT" campaign urging ChatGPT subscription cancellations over the company's military ties. Meanwhile, Anthropic—supposedly the "ethical AI" alternative—is already powering US military operations against Iran, despite earlier public feuds with the Pentagon over weaponizing Claude. The irony is stark: users are fleeing OpenAI for crossing ethical lines that Anthropic has already quietly crossed.
This backlash reveals how quickly AI ethics positions can become marketing theater. Anthropic built its brand on being the responsible choice, yet it's actively enabling military strikes. OpenAI faced immediate user revolt for announcing what Anthropic was already doing behind closed doors. The disconnect shows how little users actually know about their AI providers' government contracts—and how much companies benefit from that ignorance.
The protests extend beyond subscriptions. London saw its largest anti-AI demonstration to date, while the broader movement links AI concerns to immigration enforcement under Trump. These aren't just tech worker concerns anymore—they're mainstream political positions. Critics describe OpenAI's Pentagon deal as "opportunistic and sloppy," suggesting companies are rushing into defense contracts without weighing user sentiment or long-term brand damage.
For developers and AI users, this creates a trust problem with no clean solutions. Every major AI provider has government ties, but transparency varies wildly. If ethics matter to your users or business, you need clear policies about which AI providers align with your values—and you need to accept that pure options may not exist.
