NVIDIA CEO Jensen Huang told Lex Fridman that artificial general intelligence has already arrived, but his reasoning exposes how flexibly AI leaders define breakthrough milestones. When Fridman suggested AGI would mean an AI capable of building and running a billion-dollar tech company, Huang immediately agreed—then narrowed the criteria. "You said a billion," Huang replied, "and you didn't say forever." In his view, an AI just needs to hit that valuation once, not sustain a business, manage people, or navigate complex organizational challenges.

This isn't Huang's first definitional pivot. At the 2023 New York Times DealBook Summit, he defined AGI as software that could pass tests "that approximate normal human intelligence at a reasonably competitive level," predicting it would arrive within five years. Now he's claiming we're already there by cherry-picking the most generous interpretation possible. For a CEO whose $4 trillion company depends on AI infrastructure demand, declaring mission accomplished serves obvious strategic purposes: it validates current investment while framing future breakthroughs as incremental improvements rather than fundamental leaps.

The pattern here matters more than any single claim. As AI companies burn unprecedented capital while facing mounting pressure to deliver on promises, the goalposts for "success" keep shifting toward whatever makes current capabilities look sufficient. Huang's framing reflects an industry-wide problem: AGI has become a marketing term that means whatever helps justify the next funding round or stock valuation, not a technical milestone with consistent criteria.

For developers building with current AI tools, this definitional chaos creates real planning challenges. If industry leaders can't agree on what constitutes breakthrough capability, how do you decide whether to architect for the capabilities being promised or for today's impressive but limited models? The safest bet remains designing for incremental capability improvements rather than betting on sudden leaps to human-level reasoning.
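
One practical way to make that bet concrete, sketched below with entirely hypothetical names (TextModel, StubModel, summarize), is to hide model access behind a narrow interface so that a more capable model becomes an adapter swap rather than an architectural rewrite.

```python
# A minimal sketch, not a vendor integration: every name here (TextModel,
# StubModel, summarize) is hypothetical. The point is the seam, not the code.
from dataclasses import dataclass
from typing import Protocol


class TextModel(Protocol):
    """The only capability the rest of the system is allowed to assume."""

    def complete(self, prompt: str) -> str: ...


@dataclass
class StubModel:
    """Stand-in for a real adapter that would wrap a vendor SDK."""

    name: str

    def complete(self, prompt: str) -> str:
        # A real adapter would call the vendor's API here; callers never see it.
        return f"[{self.name}] summary of: {prompt[:40]}..."


def summarize(model: TextModel, document: str) -> str:
    # Application code targets the interface, never a specific model, so an
    # incremental capability gain arrives as a new adapter, not a rewrite.
    return model.complete(f"Summarize in one sentence:\n{document}")


if __name__ == "__main__":
    report = "Quarterly revenue rose 8% on data-center demand."
    print(summarize(StubModel(name="model-v1"), report))
    # Next year's model drops in without touching summarize() or its callers.
    print(summarize(StubModel(name="model-v2"), report))
```

The design bet matches the advice above: treat model capability as a dependency you inject, so the system benefits from steady improvement and doesn't depend on a sudden leap that may never arrive.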