Teleport CEO Ev Kontsevoy argues that AI safety discussions are missing the real problem: enterprise identity systems built for humans can't handle AI agents. While humans log in predictably and work slowly enough for security gaps to be manageable, AI agents "don't sleep, don't follow predictable paths, and can move across your infrastructure in seconds," Kontsevoy told Help Net Security. Most organizations are plugging these non-deterministic actors into the same fragmented identity systems they already struggle to manage—static credentials, over-scoped access, and minimal real-time visibility.
This connects directly to my recent piece on deliberately broken AI in high-stakes decisions. While I focused on intentional failures as accountability mechanisms, Kontsevoy is highlighting unintentional failures from infrastructure that was never designed for AI's speed and unpredictability. The identity sprawl that enterprises accumulated over decades—too many roles, credentials, and disconnected tools—becomes exponentially more dangerous when AI agents can exploit those gaps in seconds rather than hours.
Kontsevoy claims Teleport has solved this with unified identity layers that treat humans, machines, and AI agents as "first-class identities" with short-lived, continuously validated access. But he admits the real barrier is conceptual: organizations still bolt identity onto infrastructure rather than building it as infrastructure. The technical primitives exist—cryptographic credentials, policy governance, real-time verification—but every platform implements them differently, creating the same fragmentation at higher velocity.
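The primitives Kontsevoy lists are simple enough to sketch. The snippet below is a minimal illustration (not Teleport's actual implementation) of what "short-lived, continuously validated access" means in practice: a credential carries its own expiry and scopes, is signed, and is re-verified on every use. All names here (`issue_credential`, `validate`, the demo signing key) are hypothetical.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-signing-key"  # stand-in for a real CA or signing service

def issue_credential(identity: str, scopes: list, ttl_seconds: int = 300) -> str:
    """Mint a short-lived, signed credential that expires on its own."""
    claims = {"sub": identity, "scopes": scopes, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def validate(token: str, required_scope: str) -> bool:
    """Re-check the credential on every use: signature, expiry, and scope."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if time.time() > claims["exp"]:
        return False  # credential lapses on its own, no revocation sweep needed
    return required_scope in claims["scopes"]

token = issue_credential("agent-42", ["db:read"], ttl_seconds=60)
print(validate(token, "db:read"))   # granted scope, still within TTL
print(validate(token, "db:write"))  # scope never granted
```

The point of the expiry is exactly the speed problem Kontsevoy describes: a leaked static credential is dangerous indefinitely, while a 60-second credential limits the blast radius to seconds even if the agent misbehaves.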
For developers building AI agents, this matters immediately. Before you worry about alignment or safety at the model level, audit what your agents can actually access. Most production AI failures won't come from the AI being too smart—they'll come from enterprise identity systems that were already broken, just moving too slowly for anyone to notice.
