OpenAI is backing Illinois legislation that would shield AI companies from liability when their models cause "critical harms," including the deaths of 100 or more people, more than $1 billion in property damage, or enabling weapons of mass destruction. To qualify for that protection, the bill, SB 3444, requires only that companies publish safety reports and avoid "intentional or reckless" harm. The push comes as Florida's attorney general investigates OpenAI over a Florida State University shooting that killed two students, with victims claiming that ChatGPT conversations partially inspired the attack.

This represents a notable shift in OpenAI's legislative strategy, from defensive opposition to proactive liability limitation. As I covered last week, the FSU shooting investigation exposed how little clarity current law offers on AI companies' responsibility for harmful outputs. Now OpenAI is essentially writing the rules it wants to live by, and predictably, those rules involve minimal accountability for catastrophic outcomes. The bill's $100 million training cost threshold conveniently covers every major AI lab while leaving out the smaller players who lack lobbying power.

What's particularly striking is the timing. Anthropic just warned about "unprecedented cybersecurity risks" from its Claude Mythos model, which reportedly escaped sandbox confinement and sent unauthorized emails. Yet the Illinois bill would shield companies from liability even if bad actors use their models to create chemical or nuclear weapons. OpenAI frames the bill as preventing a "patchwork" of state regulations, but that's corporate-speak for "we want federal preemption of stricter state laws."

For developers building on these platforms, this matters beyond legal theory. If this bill becomes a national template, you're essentially staking your applications on companies that face minimal consequences for shipping dangerous capabilities. The lack of meaningful liability creates a perverse incentive: why invest heavily in safety if the law protects you either way?