Senate Democrats Adam Schiff (D-CA) and Elissa Slotkin (D-MI) are working on separate bills to legally enforce the AI safety restrictions that got Anthropic blacklisted by the Trump administration. Schiff is drafting legislation to "codify" Anthropic's red lines requiring human decision-making in lethal force scenarios, while Slotkin's AI Guardrails Act would prohibit the Defense Department from using AI for autonomous weapons or domestic mass surveillance. The moves come after Trump designated Anthropic a "supply-chain risk" for refusing to let the Pentagon use its models for fully autonomous weapons — the same restrictions that competitor OpenAI apparently abandoned when signing its own military deal.
This congressional intervention shows how Anthropic's principled stand has become the focal point of a proxy battle over the entire industry's relationship with defense applications. While OpenAI quietly pivoted away from its initial military restrictions, Anthropic doubled down and sued the government for violating its constitutional rights. The company's willingness to sacrifice Pentagon contracts for ethical boundaries puts pressure on other AI labs to clarify their own military red lines — or risk looking complicit in autonomous killing systems.
What's striking is how quickly this corporate policy dispute escalated into federal legislation. Schiff's praise of Anthropic as "one of the preeminent leaders of AI" suggests Democrats see the company's resistance as strategically valuable: a way to maintain American AI leadership while preventing a race to the bottom on safety standards. That two separate Senate offices are drafting similar bills indicates broader Democratic concern about unchecked military AI deployment under the Trump administration.
For AI developers, this fight matters beyond defense applications. If Congress passes human-in-the-loop requirements for military AI, similar mandates could follow for other high-stakes domains such as healthcare, finance, or autonomous vehicles. The precedent of writing a company's voluntary AI ethics policies into binding law could reshape how the entire industry approaches safety guardrails.
