OpenAI and Microsoft closed the recapitalization today, formally ending the Azure-exclusive era for OpenAI products and locking in the financial terms of the new arrangement. Microsoft now holds approximately 27% of OpenAI Group PBC on an as-converted, fully-diluted basis, valued at roughly $135B. OpenAI committed to an incremental $250B of Azure services on top of existing commitments, but Microsoft loses its right of first refusal as OpenAI's compute provider; non-API products can now ship on any cloud, while API products developed jointly with third parties remain Azure-exclusive. Microsoft's IP license to OpenAI models and products extends through 2032 but is now non-exclusive, and it explicitly covers post-AGI models, with safety guardrails. Revenue share from OpenAI to Microsoft continues through 2030, subject to a total cap. The power to declare AGI moves out of the hands of OpenAI's board: an independent expert panel must now verify any AGI claim before contractual triggers fire. The framework was announced in October 2025; today is the formal close.

The technical reality behind the headline is that the Microsoft-as-bottleneck era ended in practice months ago and is only now ending on paper. OpenAI signed a $38B AWS compute deal in late 2025, has been quietly serving inference on Oracle Cloud and CoreWeave for over a year, and the Stargate buildout was always going to outrun what Azure could deliver in a single-cloud architecture. The April 27 closing is the legal alignment with what the workload distribution already looked like. The genuinely new piece is the equity number. $135B at 27% implies an OpenAI Group valuation around $500B, consistent with recent secondary-market pricing. Microsoft's stake is now the single largest equity position in any frontier AI lab, and combined with the $250B Azure commit, Microsoft is locked into OpenAI revenue and OpenAI infrastructure spending for at least seven years. The framing of this as Microsoft "losing exclusivity" is half the story; Microsoft also extracted a $250B compute floor and capped its revenue-share outflow, which on net is a better commercial position than the original 2019 agreement.
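The implied-valuation arithmetic above is a one-line check, using the two figures reported in the deal terms:

```python
# Back out the implied OpenAI Group valuation from Microsoft's stake.
stake_value_b = 135.0  # Microsoft's stake, in $B (as reported)
stake_fraction = 0.27  # ~27% on an as-converted, fully-diluted basis

implied_valuation_b = stake_value_b / stake_fraction
print(f"Implied valuation: ~${implied_valuation_b:.0f}B")  # → ~$500B
```

The result lines up with the ~$500B figure cited from recent secondary-market pricing; any difference in practice would come from rounding in the reported 27% and from which share classes count as fully diluted.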

The broader implication, taken with this week's Google-Anthropic $40B deal and the parallel AWS-Anthropic deal, is that the multi-cloud frontier AI era is now formally locked in. Three of the top frontier labs (OpenAI, Anthropic, and increasingly DeepMind via Google's TPU integration) are anchored to two or more hyperscalers each, with $250B-class compute commitments treated as routine. The infrastructure side of frontier AI now looks structurally like a tight bilateral oligopoly: three suppliers (Nvidia, Google TPU, AWS Trainium) facing three or four customers at scale, with the customers' bargaining power increasing as exclusivity terms unwind. The AGI panel clause is the more interesting governance precedent: it removes the unilateral right to declare AGI from a single private board and makes it subject to an independent expert process. Whether the panel composition is meaningful or theatrical depends on selection criteria not yet public, but the principle of removing self-judgment from a contractual trigger this large is novel and worth watching as a template.

For developers, three concrete things change. One: OpenAI APIs will become available on AWS Bedrock and other cloud control planes within months, removing the Azure-routing requirement for enterprise customers who already have non-Azure cloud commitments. Two: the open-weight allowance buried in the new agreement matters; OpenAI is now contractually permitted to release open-weight models that meet capability criteria, which is the first time since GPT-2 that OpenAI has had explicit contractual cover to ship weights, even if they still choose not to. Three: pricing competition between Azure-OpenAI, AWS-OpenAI, and direct OpenAI API will narrow the cloud-vendor markup that has been baked into Azure-OpenAI pricing for the last three years; build your stack assuming OpenAI inference becomes a commodity sold by multiple resellers within the next 12 months. The end state of all this restructuring is that the frontier-model layer and the compute-provider layer are now formally decoupled: you do not have to commit to a cloud to use a model, and you do not have to commit to a model to use a cloud. That is healthier for the ecosystem than what came before.
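If inference really does become a commodity sold through multiple resellers, routing turns into a pricing decision rather than an architectural one. A minimal sketch of that idea follows; the provider names and per-token prices are entirely hypothetical placeholders, not figures from the deal or from any vendor's price list:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    input_price: float   # hypothetical $ per 1M input tokens
    output_price: float  # hypothetical $ per 1M output tokens

def estimated_cost(p: Provider, in_tokens: int, out_tokens: int) -> float:
    """Estimated cost of one request at this reseller's token prices."""
    return (in_tokens * p.input_price + out_tokens * p.output_price) / 1_000_000

def cheapest(providers: list[Provider], in_tokens: int, out_tokens: int) -> Provider:
    # Same model weights behind every endpoint, so price is the tiebreaker.
    return min(providers, key=lambda p: estimated_cost(p, in_tokens, out_tokens))

# Illustrative only: made-up resellers with made-up prices.
providers = [
    Provider("azure-openai", 2.50, 10.00),
    Provider("bedrock-openai", 2.40, 9.80),
    Provider("openai-direct", 2.50, 10.00),
]
best = cheapest(providers, in_tokens=50_000, out_tokens=5_000)
print(best.name)  # → bedrock-openai (cheapest under these made-up prices)
```

The design point is that once the model layer and the compute layer decouple, this kind of thin cost-aware dispatch layer is all that should separate your application from any particular reseller.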