At GTC 2026's packed "Open Model Super Panel," Jensen Huang steered the conversation away from the tired open-versus-closed model debate toward something more consequential: AI as orchestrated systems. Alongside Perplexity's Aravind Srinivas, LangChain's Harrison Chase, and others, Huang argued that "proprietary versus open is not a thing. It's proprietary and open" — positioning AI as a stack where different models serve different functions within larger workflows.
This represents a fundamental shift in how the industry thinks about AI infrastructure. While everyone's been fixated on which foundation model will "win," the real value is moving up the stack to orchestration, memory, tools, and runtime systems. Srinivas crystallized this with Perplexity Computer's approach: instead of forcing users to choose models and stitch workflows manually, the system decides which models to call and when. That's not just product positioning — it's recognizing that the next abstraction layer isn't another chatbot, but a delegation computer that knows when GPT-4 is overkill and when a local model suffices.
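The delegation idea above can be sketched as a tiny router that scores a request and sends it to the cheapest adequate model tier. This is a minimal illustration only: the tier names, the `score_complexity` heuristic, and the stub handlers are all hypothetical, not Perplexity's or LangChain's actual APIs.

```python
# Minimal sketch of model delegation: the system, not the user, decides
# which model handles a request. All names here are illustrative.
from dataclasses import dataclass
from typing import Callable


@dataclass
class ModelTier:
    name: str
    max_complexity: float  # route here only if the task scores at or below this
    handler: Callable[[str], str]


def local_small_model(prompt: str) -> str:
    return f"[local-7b] {prompt[:40]}"  # stand-in for an on-device model


def hosted_frontier_model(prompt: str) -> str:
    return f"[frontier] {prompt[:40]}"  # stand-in for an expensive API call


# Cheapest-first ordering: fall through to a bigger model only when needed.
MODEL_TIERS = [
    ModelTier("local", max_complexity=0.4, handler=local_small_model),
    ModelTier("frontier", max_complexity=1.0, handler=hosted_frontier_model),
]


def score_complexity(prompt: str) -> float:
    """Toy heuristic: long prompts or reasoning-heavy keywords score higher."""
    keywords = ("prove", "analyze", "compare", "plan", "debug")
    score = min(len(prompt) / 500, 0.5)
    if any(k in prompt.lower() for k in keywords):
        score += 0.5
    return min(score, 1.0)


def route(prompt: str) -> tuple[str, str]:
    """Return (tier_name, response) from the cheapest adequate model tier."""
    complexity = score_complexity(prompt)
    for tier in MODEL_TIERS:
        if complexity <= tier.max_complexity:
            return tier.name, tier.handler(prompt)
    last = MODEL_TIERS[-1]
    return last.name, last.handler(prompt)
```

A trivial question routes to the local tier, while a request containing reasoning keywords escalates to the frontier tier; in a real harness the heuristic would be a learned classifier and the handlers real model clients.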
What's telling is how this aligns with infrastructure reality. While Chinese-language tech coverage still tracks GPU hierarchies and gaming performance benchmarks, the enterprise conversation has moved past raw compute to harness layers — Chase's term for the orchestration engines that make multi-model systems work in practice. This isn't about ideological purity around open versus closed; it's about building systems that pragmatically combine both.
For developers, this means the action is shifting from model fine-tuning to system design. The teams building the best agent orchestration layers — not necessarily the best individual models — will capture the most value as AI moves from research demos to production workflows that actually solve business problems.
