A Teleport study of 205 security leaders reveals a stark pattern: enterprises granting excessive permissions to AI systems experience 4.5 times more security incidents than those practicing least-privilege access. Organizations with broadly permissioned AI reported a 76% incident rate, while those limiting AI to task-specific access saw only 17%. The December 2025 survey found 92% of companies already run AI in production, with 59% reporting AI-related security incidents.

This isn't really about AI being dangerous—it's about decades of bad identity management finally breaking under AI's weight. "AI has broken the camel's back," says Teleport CEO Ev Kontsevoy, pointing to organizations with more roles than employees. When an AI agent holds broad credentials and operates continuously across systems, any compromise spreads fast. Two-thirds of organizations still use static credentials for AI, a practice correlated with 20% higher incident rates.
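The alternative to static credentials is short-lived, task-scoped ones: the agent is minted a token that covers only the access it needs and expires minutes later, so a leaked credential is useless almost immediately. Here is a minimal sketch of that idea in Python; the `ScopedCredential` and `issue_credential` names are illustrative, not any real vendor API.

```python
import secrets
import time
from dataclasses import dataclass


@dataclass
class ScopedCredential:
    """A short-lived credential bound to one task's scopes."""
    token: str
    scopes: frozenset
    expires_at: float

    def allows(self, scope: str) -> bool:
        # Valid only while unexpired, and only for explicitly granted scopes.
        return time.time() < self.expires_at and scope in self.scopes


def issue_credential(scopes, ttl_seconds=300):
    """Mint a credential that expires after ttl_seconds (default: 5 minutes)."""
    return ScopedCredential(
        token=secrets.token_urlsafe(32),
        scopes=frozenset(scopes),
        expires_at=time.time() + ttl_seconds,
    )


# An agent that summarizes tickets gets read-only access to one system, briefly.
cred = issue_credential({"tickets:read"}, ttl_seconds=300)
print(cred.allows("tickets:read"))   # granted scope, within TTL
print(cred.allows("tickets:write"))  # scope never granted
```

In production this role is played by a secrets broker or identity provider (issuing certificates or tokens via something like OAuth or cloud STS), but the contract is the same: access is granted per task and decays on its own, instead of living forever in a config file.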

The data reveals a counterintuitive finding: the organizations most confident in their AI deployments experienced twice as many incidents. This suggests either that overconfidence breeds carelessness, or that mature AI adoption naturally expands the attack surface. Only 3% of respondents have automated controls operating at machine speed—a glaring gap when dealing with systems that make decisions in milliseconds.

For developers building AI systems, this is a wake-up call about infrastructure fundamentals. Fine-grained permissions, dynamic credentials, and automated access controls aren't nice-to-haves anymore—they're essential guardrails. The gap is too dramatic to ignore: in the survey, proper access controls correlated with incident rates falling from 76% to 17%. That's not just a security win; it's a business imperative.
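What "automated access controls at machine speed" means in practice is a deny-by-default gate in front of every agent action: each tool call is checked against an explicit per-agent allowlist, and every decision is logged for audit. A minimal sketch, with invented agent names and policy tables purely for illustration:

```python
# Deny-by-default authorization gate for AI agent tool calls.
# AGENT_POLICIES and check_tool_call are illustrative names, not a real API.

AGENT_POLICIES = {
    "support-summarizer": {"tickets:read"},
    "deploy-bot": {"ci:read", "deploy:staging"},
}

audit_log = []


def check_tool_call(agent: str, action: str) -> bool:
    """Allow the call only if the agent's policy explicitly grants the action."""
    allowed = action in AGENT_POLICIES.get(agent, set())  # unknown agent -> deny
    audit_log.append((agent, action, "allow" if allowed else "deny"))
    return allowed


print(check_tool_call("support-summarizer", "tickets:read"))    # in allowlist
print(check_tool_call("support-summarizer", "deploy:staging"))  # never granted
print(check_tool_call("unknown-agent", "tickets:read"))         # no policy: deny
```

Because the check runs inline on every call, it operates at the same speed as the agent itself—the property the survey found in only 3% of organizations—and the audit trail gives humans something to review after the fact rather than forcing them into the hot path.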