New research from Token Security reveals that 65% of agentic chatbots have never been used since creation, yet their access credentials to external systems remain live and active. The study also found that 51% of these agents rely on hard-coded credentials instead of proper OAuth flows, and 81% run on self-managed frameworks in cloud environments. CEO Itamar Apelblat describes this as creating risks "similar to orphaned service accounts, only harder to see" because the access is hidden behind conversational interfaces.

This repeats the security mistakes we made with service accounts and API keys a decade ago, except now business users are creating these credential-backed systems entirely outside IT governance. As I wrote in March about over-privileged AI systems driving a 4.5x spike in security incidents, organizations are treating AI agents like "quick experiments" rather than the governed identities they actually are. The McKinsey breach mentioned in related coverage shows what happens when this casual approach meets determined attackers — an autonomous AI agent exploited basic API vulnerabilities to steal 57K user accounts and 46M chat messages in just two hours.

What's particularly concerning is how these dormant agents create invisible attack surfaces. Unlike traditional service accounts visible in cloud consoles, AI agent credentials are buried in tool configurations that business users deploy and forget. When 65% of these systems never get used but retain production access indefinitely, we're essentially maintaining a fleet of potential backdoors that no one is monitoring or governing.

Developers building AI agents need to implement proper credential lifecycle management from day one. Use OAuth instead of hard-coded keys, build expiration into access grants, and track actual usage to revoke dormant permissions. The "move fast and break things" mentality doesn't work when your chatbot has database access.
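The lifecycle discipline described above can be sketched in a few lines. This is a minimal illustration, not a production system: the `AgentCredential` registry, the 30-day dormancy threshold, and the `find_revocable` helper are all hypothetical names chosen for the example, standing in for whatever secrets manager or identity platform an organization actually uses.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class AgentCredential:
    """One agent's access grant, with an explicit expiry and a usage record."""
    agent_id: str
    issued_at: datetime
    expires_at: datetime                   # build expiration into every grant
    last_used: Optional[datetime] = None   # None = never used since creation

    def record_use(self, when: datetime) -> None:
        """Call on every authenticated request so dormancy is detectable."""
        self.last_used = when

def find_revocable(creds: list[AgentCredential], now: datetime,
                   dormancy: timedelta = timedelta(days=30)) -> list[str]:
    """Return agent IDs whose grants are expired, never used, or dormant."""
    revocable = []
    for c in creds:
        expired = now >= c.expires_at
        never_used = c.last_used is None and now - c.issued_at >= dormancy
        dormant = c.last_used is not None and now - c.last_used >= dormancy
        if expired or never_used or dormant:
            revocable.append(c.agent_id)
    return revocable
```

Run on a schedule, a sweep like this would have flagged the 65% of never-used agents in Token Security's findings long before they became standing backdoors; the key design choice is that a credential with no recorded usage is treated as revocable by default, rather than trusted indefinitely.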