OpenAI testified in favor of Illinois bill SB 3444, which would shield AI labs from liability for "critical harms"—including deaths of 100+ people or $1+ billion in damages—caused by their frontier models. The protection would apply to any AI model trained with over $100 million in compute costs, covering OpenAI, Google, Anthropic, and Meta, provided the company did not act intentionally or recklessly and published safety reports. The bill defines critical harms as AI-enabled creation of weapons of mass destruction or autonomous criminal conduct leading to mass casualties.

This marks a strategic shift for OpenAI from defensive opposition to proactive lobbying for liability protections. Multiple AI policy experts called SB 3444 more extreme than previous OpenAI-backed measures. The timing aligns with the Trump administration's crackdown on state AI safety laws and OpenAI's push for federal preemption of state regulations. OpenAI's Caitlin Niedermeyer argued the bill would prevent "a patchwork of inconsistent state requirements."

The legislative push coincides with OpenAI releasing a 13-page policy paper calling for sweeping economic reforms to prepare for "superintelligence," including shorter workweeks and public wealth funds. The timing raised eyebrows given The New Yorker's concurrent investigation questioning CEO Sam Altman's trustworthiness on AI safety issues. Critics see the liability shield as premature, given the absence of federal AI regulation and growing concern that advanced models such as Anthropic's Claude Mythos raise novel safety challenges.

For developers, the bill could set a nationwide precedent if other states follow Illinois. The $100 million compute threshold creates a clear dividing line between protected "frontier" labs and smaller developers, who might face different liability standards. If passed, the bill could accelerate AI development by reducing legal risk for major labs while potentially leaving smaller companies more exposed to lawsuits.