Florida Attorney General James Uthmeier launched an investigation into OpenAI Thursday, alleging the company poses national security and public safety risks. Uthmeier claims OpenAI's data and technology are "falling into the hands of America's enemies, such as the Chinese Communist Party" and links ChatGPT to harms including the generation of child sexual abuse material and the encouragement of self-harm. The probe specifically references the April 2025 Florida State University shooting, where the suspect allegedly maintained "constant communication with ChatGPT," according to a family lawsuit filed this week.

This marks the second major regulatory challenge for OpenAI as it prepares for its IPO this year. The Federal Trade Commission already opened a separate investigation into whether OpenAI violated consumer protection laws by causing "reputational harm" through false or misleading statements about real individuals. The FTC's 20-page records demand specifically probes how OpenAI addresses its models' tendency to generate defamatory content about actual people.

While Florida focuses on security threats and criminal misuse, the FTC investigation reveals deeper concerns about AI accuracy and harm to individuals — issues that directly impact OpenAI's liability exposure and business model. The timing is particularly problematic as OpenAI simultaneously expands its U.S. data center footprint through the Trump-backed Stargate initiative, seeking locations across 16 states for what could become a $500 billion infrastructure buildout.

For developers integrating OpenAI's APIs, these investigations signal potential compliance requirements ahead. Expect new content-filtering mandates, enhanced logging requirements, and possible restrictions on certain use cases, particularly anything involving minors or sensitive personal information.
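The filtering-and-logging pattern that paragraph anticipates can be sketched as a pre-submission screen that refuses flagged prompts and writes an audit record. This is a minimal illustration, not any mandated scheme: the category names, log fields, and `screen_request` helper are assumptions, and the flags dict stands in for the `categories` object a moderation endpoint would return.

```python
import json
import time

# Illustrative blocklist; the category names here are assumptions,
# not a published compliance requirement.
BLOCKED_CATEGORIES = {"sexual/minors", "self-harm/instructions"}

def screen_request(prompt: str, moderation_flags: dict) -> dict:
    """Decide whether to forward a prompt and emit an audit log record.

    `moderation_flags` maps category name -> bool, shaped like the
    flags a moderation endpoint would return for the prompt.
    """
    hits = sorted(c for c, flagged in moderation_flags.items()
                  if flagged and c in BLOCKED_CATEGORIES)
    record = {
        "ts": time.time(),
        # Log the prompt's size rather than its content, to limit
        # retention of sensitive personal information.
        "prompt_chars": len(prompt),
        "blocked": bool(hits),
        "categories": hits,
    }
    # A real audit trail would go to durable storage; stdout stands in here.
    print(json.dumps(record))
    return record

# A flagged request is refused before any API call is made.
result = screen_request("example prompt",
                        {"self-harm/instructions": True, "violence": False})
assert result["blocked"] and result["categories"] == ["self-harm/instructions"]
```

Screening before the API call, rather than filtering responses afterward, keeps disallowed content out of both the request logs and the model's context.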