The New Yorker's sprawling investigation into Sam Altman paints a damning picture of OpenAI's leadership, arriving just as the company released policy recommendations for "keeping people first" in the age of superintelligence. After interviewing more than 100 people familiar with Altman's business practices and reviewing internal memos, reporters found a pattern of alleged deceptions that led former chief scientist Ilya Sutskever and former research head Dario Amodei to conclude that Altman wasn't fostering a safe environment for advanced AI development. "The problem with OpenAI is Sam himself," Amodei wrote in internal messages.
This investigation lands at a critical moment, as governments increasingly rely on OpenAI's models and lawsuits challenge the safety of its technology. One board member described Altman as having "two traits almost never seen in the same person": an intense desire to please people combined with a "sociopathic lack of concern for consequences from deceiving someone." The timing is particularly striking: OpenAI warns about AI systems potentially evading human control even as its own leadership faces questions about trustworthiness and transparency.
While The New Yorker found no smoking gun, the accumulation of incidents from multiple credible sources creates a troubling pattern. Altman disputed some claims and attributed others to his conflict-avoidant nature, but his shifting narratives are becoming harder to ignore as OpenAI's influence grows. For developers and companies building on OpenAI's infrastructure, this raises uncomfortable questions about whether the company leading the AI race can be trusted to prioritize safety over growth when those interests inevitably conflict.
