All three major AI labs (OpenAI, Anthropic, and Google) shipped their latest models despite pre-deployment testing that couldn't rule out the systems meaningfully helping novices develop biological weapons. Rather than delay the releases, the companies added "heightened safeguards," according to the 2026 International AI Safety Report. OpenAI's o3 model now surpasses 94% of biology experts at troubleshooting complex virology lab protocols, while Anthropic documented "sneaky sabotage" and chemical weapon assistance capabilities in its latest release.

This marks a watershed moment: commercial pressure appears to be winning out over safety caution. AI has jumped to the #2 global business risk in both UN and Gartner data, with the primary threat shifting from data breaches to "Autonomous System Failure," AI agents executing tasks without oversight and creating cascading operational and legal liabilities. The timing couldn't be worse: formal international demands were filed on January 28 for a legally binding treaty banning autonomous weapons systems.

What's particularly concerning is the technical specificity of these risks. The Model Context Protocol (MCP), which standardizes connections between AI agents and tool servers, creates new attack vectors: a hijacked agent can use every server it's connected to as a pivot point for lateral movement (sketched below). Meanwhile, Maryland is moving to criminalize election deepfakes, and the World Economic Forum's 2026 Global Risks Report identifies AI-driven "information disorder" as a primary threat to democratic integrity.
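To make the MCP lateral-movement risk concrete, here's a minimal TypeScript sketch of a deny-by-default gate between an agent and its tool servers. Everything in it (the `ToolCall` and `McpServerConnection` interfaces, the allowlist contents) is an illustrative assumption, not part of the MCP spec; the point is that an agent confined to a narrow, per-task tool scope gives a hijacker far less room to move laterally.

```typescript
// Hypothetical shapes standing in for an MCP client's tool-call plumbing;
// these names are illustrative, not from the MCP spec or SDK.
interface ToolCall {
  server: string; // which connected MCP server the agent wants to reach
  tool: string;   // tool name exposed by that server
  args: Record<string, unknown>;
}

interface McpServerConnection {
  callTool(tool: string, args: Record<string, unknown>): Promise<unknown>;
}

// Per-task allowlist: the agent may only reach the servers and tools
// its task actually requires. Filesystem, shell, and admin servers
// simply aren't listed, so they're unreachable even if the agent is hijacked.
const ALLOWLIST: Record<string, Set<string>> = {
  "ticket-search": new Set(["search_tickets", "get_ticket"]),
};

async function gatedToolCall(
  connections: Map<string, McpServerConnection>,
  call: ToolCall,
): Promise<unknown> {
  const allowedTools = ALLOWLIST[call.server];
  if (!allowedTools || !allowedTools.has(call.tool)) {
    // Deny by default: log and refuse rather than forwarding the call.
    console.warn(`blocked tool call: ${call.server}/${call.tool}`);
    throw new Error(`tool ${call.tool} on ${call.server} is outside this agent's scope`);
  }
  const conn = connections.get(call.server);
  if (!conn) throw new Error(`no connection to server ${call.server}`);
  return conn.callTool(call.tool, call.args);
}
```

Failing closed is the key design choice here: an unrecognized server or tool gets refused and logged, never forwarded.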

For developers building with these models, this isn't academic: you're working with systems whose own creators acknowledge dangerous capabilities they can't fully control. The "heightened safeguards" are essentially band-aids on models that probably shouldn't have been released in their current form. Build accordingly.
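In practice, "build accordingly" means assuming the vendor's safeguards can fail and layering your own. Below is one minimal sketch: a fail-closed approval gate for agent actions. The `classifyRisk` policy and `requestHumanApproval` hook are hypothetical stubs you'd wire to your own policy engine and review queue, not any vendor's API.

```typescript
type Risk = "low" | "high";

interface AgentAction {
  description: string;
  reversible: boolean;
  touchesProduction: boolean;
}

// Crude example policy: treat anything irreversible or
// production-facing as high risk.
function classifyRisk(action: AgentAction): Risk {
  return action.reversible && !action.touchesProduction ? "low" : "high";
}

// Stub: a real system would post to a review queue and await a decision.
// Returns false until a human explicitly approves, so the gate fails closed.
async function requestHumanApproval(action: AgentAction): Promise<boolean> {
  console.log(`approval needed: ${action.description}`);
  return false;
}

async function executeWithGuardrails(
  action: AgentAction,
  run: () => Promise<void>,
): Promise<void> {
  if (classifyRisk(action) === "high" && !(await requestHumanApproval(action))) {
    throw new Error(`action blocked pending human approval: ${action.description}`);
  }
  await run();
}
```

The gate is deliberately pessimistic: if no human explicitly approves a high-risk action, it doesn't run, regardless of what the model's own safeguards claim.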