Seven lawsuits filed in California court on April 29 allege OpenAI could have prevented one of the deadliest mass shootings in Canada's history: a school shooting in Tumbler Ridge, a 2,000-person rural mining town in British Columbia. According to the lawsuits and WSJ whistleblower reporting cited by Ars Technica, OpenAI's internal safety team flagged the shooter's ChatGPT account more than eight months before the attack as posing "a credible threat of gun violence in the real world." OpenAI leadership overruled the safety team's recommendation to notify police, reasoning that "the user's privacy and the potential stress of an encounter with cops outweighed the risks of violence." The shooter's account was deactivated. Per the lawsuits, OpenAI then told the shooter how to get back onto ChatGPT by signing up with another email address. Local police separately had a file on the shooter and had previously removed guns from the home. The lawsuits are led by attorney Jay Edelson and represent six families of victims who were killed and one family whose daughter remains in the ICU. Sam Altman issued a public apology last week.
The procedural decision to overrule the safety team is the part that matters operationally. Most large AI companies have some version of an internal trust-and-safety pipeline that flags abusive or threat-bearing prompts; the question is what gets done with those flags. The lawsuits allege that notifying police on a credible-threat flag was the expected policy, but here the safety team's escalation was overridden at a higher level on user-privacy grounds, with no countervailing process to ensure the decision held up. The "told the user how to make a new account" detail is the most damning: it implies the deactivation was treated as a customer-experience event, not a threat-management event, and the customer-support workflow kept running normally despite the underlying flag. Process design matters: a flag that disables one account without establishing user attribution across email, IP, payment, and behavioral signature is not a safety control. It is a UX speed bump.
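What that distinction looks like in a data model: the flag attaches to a set of identifiers that survive re-registration, not to a single account row. The sketch below is a minimal illustration under that assumption; all names, fields, and the identifier kinds are hypothetical, not OpenAI's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class AttributionKey:
    """One identifier that survives a new email signup (hypothetical kinds)."""
    kind: str    # e.g. "email_hash", "payment_fingerprint", "ip_prefix", "behavioral_signature"
    value: str


@dataclass
class ThreatFlag:
    """A credible-threat flag attached to a person, not to a single account."""
    flag_id: str
    severity: str                                     # e.g. "credible_threat"
    attribution: set[AttributionKey] = field(default_factory=set)
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def matches_flag(flag: ThreatFlag, signup_identifiers: set[AttributionKey]) -> bool:
    """At account creation, check every retained identifier, not just the email address."""
    return bool(flag.attribution & signup_identifiers)
```

A deactivation that only deletes the account row, and never consults something like matches_flag at the next signup, is the speed-bump version: the person walks back in through the front door with a new email.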
The lawsuit set is filed in California strategically. Edelson told Ars the goal is to get OpenAI before "a jury of their peers" on home turf, and that the California filings are designed to supersede a Canadian suit where OpenAI was expected to contest jurisdiction, part of what Edelson characterized as an OpenAI strategy to delay litigation over ChatGPT-linked deaths until after the company's planned IPO this year. The IPO timing matters because plaintiffs' attorneys typically structure cases to maximize disclosure pressure in the run-up to a public offering. Whatever the merits of the individual claims, OpenAI is going to face discovery on internal safety-team flags, the volume of credible-threat escalations leadership has overridden, and the customer-support workflow that continues to engage flagged accounts. Combined with the Musk v. Altman trial (covered earlier this session) and the OpenAI-Pentagon contracts Anthropic refused (also covered), OpenAI's pre-IPO legal-disclosure surface is unusually broad.
For builders, three concrete things. First, if you ship any AI product with a trust-and-safety function, document your escalation pipeline as code, with each level of override producing an auditable record. Verbal overrides by leadership do not survive depositions; ticketed overrides with a stated rationale do. Second, account deactivation is not a safety control unless it is tied to user-level attribution that survives email re-registration. Tie threat flags to behavioral signature, payment instrument, IP, and any other identifiers your TOS allows you to retain; otherwise the deactivation is theater. Third, the harder pattern is the one Anthropic is testing: refuse the contracts and use cases where you cannot enforce safety controls, and accept the commercial cost. OpenAI is being sued for putting user privacy ahead of safety; the open question for the rest of the industry is whether putting safety ahead of user privacy, or refusing the use case entirely, is the policy that survives the next round of litigation.
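On the first point, "escalation pipeline as code" can be as simple as a function that refuses to record an override without a named decider and a written rationale. This is a minimal sketch under those assumptions; the action names, fields, and storage are hypothetical, and a real system would persist the record to append-only, externally retained storage rather than an in-memory list.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class Action(Enum):
    NOTIFY_LAW_ENFORCEMENT = "notify_law_enforcement"
    DEACTIVATE_ACCOUNT = "deactivate_account"
    NO_ACTION = "no_action"


@dataclass(frozen=True)
class OverrideRecord:
    """An override only counts if it names a person, a time, and a written rationale."""
    flag_id: str
    recommended: Action      # what the safety team escalated
    decided: Action          # what leadership actually authorized
    decided_by: str          # a named individual, not a team alias
    rationale: str           # required free-text justification
    decided_at: datetime


AUDIT_LOG: list[OverrideRecord] = []   # stand-in for append-only, externally retained storage


def resolve_escalation(flag_id: str, recommended: Action, decided: Action,
                       decided_by: str, rationale: str) -> Action:
    """Any divergence from the safety team's recommendation produces a ticketed record."""
    if decided != recommended:
        if not rationale.strip():
            raise ValueError("Overrides without a written rationale are rejected.")
        AUDIT_LOG.append(OverrideRecord(
            flag_id=flag_id,
            recommended=recommended,
            decided=decided,
            decided_by=decided_by,
            rationale=rationale,
            decided_at=datetime.now(timezone.utc),
        ))
    return decided
```

The point of the structure is not to prevent leadership from overriding a flag; it is to guarantee that every override leaves a record stating who made the call and why, so the decision can be defended, or challenged, later.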
