Amazon announced on April 28 that AWS's Bedrock service now offers OpenAI's latest models, the Codex code-writing service, and a new product called Bedrock Managed Agents, designed specifically around OpenAI's reasoning models with agent steering and security features built in. The announcement came less than 24 hours after OpenAI publicly ended Microsoft's exclusive distribution rights to its products. The timing is not coincidental: the up-to-$50-billion AWS-OpenAI deal had been blocked by Microsoft's exclusivity clause, and OpenAI just removed the block. Andy Jassy's "very interesting announcement" tweet about the Microsoft news the day before was the clearest signal that AWS was ready to move immediately.

Bedrock Managed Agents is the more interesting piece. Bedrock has been Amazon's managed service for building AI applications and choosing among foundation models for two years, but it has lacked first-class agent support tuned specifically for OpenAI's reasoning models. The new product positions OpenAI's o-series as the default reasoning engine for AWS-native agentic applications. For developers, this is the first time you can build an OpenAI-reasoning-powered agent on AWS infrastructure without going through Azure or self-hosted vLLM stacks. Codex availability matters too: it pulls OpenAI's coding model into the same surface as the rest of Bedrock's lineup (Claude, Llama, Titan, Mistral). On Bedrock, you can now switch between OpenAI Codex and Anthropic Claude by changing a single model identifier in one API call.
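As a sketch of what that one-call switch looks like with Bedrock's unified Converse API: the request body is identical across vendors and only the `modelId` field changes. The Claude ID below follows Bedrock's published naming scheme; the OpenAI ID is an assumption, since the announcement did not list concrete identifiers.

```python
# Sketch of vendor switching via Bedrock's Converse API request shape.
# The Claude model ID follows Bedrock's documented format; the OpenAI
# ID is hypothetical -- check the Bedrock console for the real ones.
CLAUDE_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"
OPENAI_ID = "openai.o4-mini-v1:0"  # assumed identifier

def build_request(model_id: str, prompt: str) -> dict:
    """The request body is the same for every vendor; switching models
    means changing only the modelId field."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

# With AWS credentials configured, the actual call would be:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   resp = client.converse(**build_request(CLAUDE_ID, "Summarize CRDTs."))
#   text = resp["output"]["message"]["content"][0]["text"]
```

The point of the sketch is the shape, not the IDs: once an application speaks the Converse request format, swapping Codex for Claude is a config change rather than an integration project.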

The strategic restructuring is the bigger story. The Microsoft–OpenAI relationship has been fraying for months: OpenAI signed the up-to-$50B AWS deal, an Oracle deal, and is now distributing on AWS Bedrock. Microsoft is reportedly building a new agent offering powered by Claude rather than GPT, and has been investing in Anthropic and other model providers. The exclusivity that defined the Microsoft–OpenAI relationship from 2019 to 2025 is over. What replaces it is multi-cloud, multi-vendor distribution that looks more like the 2010s cloud market — three hyperscalers competing on price and feature parity rather than exclusive partnerships. The frontier-AI distribution layer just decoupled from the model layer.

For builders, there are three concrete consequences. First, if you have been delaying multi-region or multi-cloud deployment of OpenAI-powered features because of Azure lock-in, that lock-in is gone; AWS-native deployment is now an option. Second, the switch-models-in-one-API-call pattern that Bedrock now supports is the right benchmarking environment: your application can A/B test reasoning models from different vendors without changing infrastructure. Set that up before committing to a model. Third, Bedrock Managed Agents is a competitive response to Google's Agents CLI and Anthropic's MCP push; every major cloud now offers a managed-agent layer. Convergence at that layer means differentiation moves down to specifics: agent observability, security boundaries, and eval pipelines. Pick your cloud's agent layer based on its failure-mode tooling, not its marketing.
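A minimal sketch of that A/B setup, under stated assumptions (the OpenAI model ID is hypothetical; only the Claude ID follows Bedrock's real naming scheme): route each request deterministically to one arm, so the same request always hits the same model and per-request scores are comparable later.

```python
import hashlib

# Candidate arms for the A/B test. The Claude ID follows Bedrock's
# documented naming scheme; the OpenAI ID is an assumption.
MODEL_IDS = [
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "openai.o4-mini-v1:0",  # hypothetical
]

def route(request_id: str, model_ids=MODEL_IDS) -> str:
    """Deterministic split: hash the request id (stable across runs,
    unlike Python's salted hash()) and pick an arm, so a given request
    always lands on the same model."""
    digest = hashlib.sha256(request_id.encode()).digest()
    return model_ids[digest[0] % len(model_ids)]

# Each arm then issues the same Bedrock call with its model ID, and
# you log (request_id, model_id, score) for offline comparison.
```

Because routing lives above the Bedrock call, swapping an arm or adding a third model is a one-line change, which is exactly the benchmarking posture worth establishing before committing to a vendor.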