The New Yorker published a 16,000-word joint profile by Ronan Farrow and Andrew Marantz on April 13, asking whether Sam Altman can be trusted with the future. The piece is built on 18 months of reporting, roughly a dozen interviews with Altman himself, and more than 100 sources with firsthand knowledge of how he operates. The standout quote belongs to an OpenAI board member: Altman is "unconstrained by truth," possessing "a strong desire to please people, to be liked in any given interaction" alongside "almost a sociopathic lack of concern for the consequences that may come from deceiving someone." This is the most substantive public challenge to Altman's credibility since the November 2023 board fight, and it lands with documentation rather than vibes.
Farrow and Marantz say they reviewed internal memos and more than 200 pages of documents, and spoke to insiders past and present. What OpenAI chose to do in response is worth paying attention to. Rather than contesting the piece broadly, the company sent a letter to two state attorneys general saying it accepts the story's account of the company's founding and the November 2023 Altman ouster as accurate. Elon Musk's lawyers are now citing the piece in his ongoing suit against OpenAI. That is an unusual posture: a company effectively vouching for a hostile profile in correspondence with regulators, which strongly suggests the factual core is harder to refute than the narrative framing. You do not endorse a story you think is fabricated.
For builders, this is not a tabloid moment to scroll past. It is the most detailed public record to date of the governance failures, the founding myths, and the behavioral patterns of the person running the most consequential AI infrastructure company in the world. If you build on the OpenAI API, you already know the practical risks: rate-limit volatility, quiet model deprecations, pricing moves, opaque roadmaps. What the Farrow piece adds is weight to the underlying question of how much of OpenAI's public communication to take at face value: safety commitments, release timelines, compute-capacity claims, nonprofit-mission framing. The piece does not answer that question directly. It raises the evidentiary bar for believing any single Altman statement without corroboration.
The actionable move for anyone betting on OpenAI as infrastructure is one you should probably already have made: do not single-source your model provider. Multi-provider abstraction layers, fallback pathways to Anthropic, Google, and open-weights providers, and real head-to-head testing across models on your actual workloads were already the right engineering posture; the Farrow piece just makes the governance risk concrete rather than theoretical. It is also worth watching what happens next inside OpenAI: board composition, safety-team independence, and the balance of commercial pressure against safety discipline in the next few releases. Those signals will matter more for your roadmap than any individual Altman public appearance. Read the piece, budget your trust accordingly, and keep your abstraction layers honest.
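The fallback posture described above can be sketched in a few lines. This is a minimal, illustrative abstraction, not any vendor's SDK: the `Provider` wrapper, the prompt-in/string-out signature, and the provider names are all assumptions, and in a real system each `call` would wrap the actual client library for OpenAI, Anthropic, Google, or an open-weights host.

```python
# Minimal sketch of a provider-agnostic fallback layer.
# All names and signatures here are illustrative assumptions,
# not real SDK calls; each `call` would wrap a vendor client.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Provider:
    name: str
    call: Callable[[str], str]  # stand-in signature: prompt -> completion


class AllProvidersFailed(RuntimeError):
    pass


def complete_with_fallback(prompt: str, providers: List[Provider]) -> Tuple[str, str]:
    """Try providers in priority order; return (provider_name, completion).

    Any exception (rate limit, deprecation, outage) triggers fallback
    to the next provider; if every provider fails, raise with details.
    """
    errors = []
    for p in providers:
        try:
            return p.name, p.call(prompt)
        except Exception as exc:
            errors.append(f"{p.name}: {exc}")
    raise AllProvidersFailed("all providers failed: " + "; ".join(errors))


if __name__ == "__main__":
    # Stub providers: the primary simulates a rate-limit failure,
    # the backup answers, so the call transparently falls through.
    def flaky_primary(prompt: str) -> str:
        raise TimeoutError("429 rate limited")

    providers = [
        Provider("primary", flaky_primary),
        Provider("backup", lambda prompt: "ok:" + prompt),
    ]
    name, out = complete_with_fallback("ping", providers)
    print(name, out)  # falls through to the backup provider
```

The design point is that routing policy lives in one place: swapping provider order, adding a local open-weights endpoint, or logging per-provider failure rates for the head-to-head testing mentioned above all happen without touching call sites.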
