Cisco's latest survey reveals what those of us building AI agents already know: enterprise security isn't ready for autonomous AI that actually does things. The networking giant warns that AI agents capable of real business actions could cause "irreversible damage" because current access control and identity systems weren't designed for non-human actors making decisions at machine speed.
This isn't new technology breaking working systems — it's new technology exposing how broken our systems already are. I wrote in March about how enterprise identity barely works for humans, and now we're asking it to handle AI agents that might execute hundreds of actions per minute across multiple systems. The fundamental issue isn't AI capabilities; it's that enterprises built their security architecture assuming every action has a human behind it making deliberate choices.
What's missing from Cisco's warning is the practical reality: companies are already deploying these agents anyway. The choice isn't between perfect security and AI agents — it's between controlled deployment with proper guardrails and shadow AI implementations that bypass IT entirely. The survey data would be more valuable if it included how many organizations are already running agentic AI in production despite these risks.
For developers building AI agents, this reinforces what we've been saying about human-in-the-loop design. The solution isn't preventing AI agents from taking actions — it's designing systems where agents act within defined boundaries, with rollback capabilities and clear audit trails. Build assuming your security model is already compromised, because in most enterprises, it probably is.
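To make that concrete, here is a minimal sketch of the guardrail pattern described above: every agent action must pass an allowlist check (the defined boundary), gets recorded to an audit trail, and registers an undo step before it runs so the whole sequence can be rolled back. All names here (`GuardedExecutor`, `execute`, `rollback_all`) are hypothetical, not from any specific framework.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class GuardedExecutor:
    """Hypothetical wrapper enforcing boundaries, auditability, and rollback
    for agent-initiated actions. A sketch of the pattern, not a product API."""
    allowed_actions: set          # the defined boundary: actions the agent may take
    audit_log: list = field(default_factory=list)
    _rollbacks: list = field(default_factory=list)

    def execute(self, action: str, do: Callable[[], object],
                undo: Callable[[], None]) -> object:
        # Deny anything outside the boundary, and log the denial too --
        # the audit trail should capture attempts, not just successes.
        if action not in self.allowed_actions:
            self.audit_log.append({"action": action, "status": "denied"})
            raise PermissionError(f"agent action {action!r} outside boundary")
        result = do()
        self._rollbacks.append(undo)  # register undo only after success
        self.audit_log.append({"action": action, "status": "executed"})
        return result

    def rollback_all(self) -> None:
        # Undo in reverse order, like aborting a transaction.
        while self._rollbacks:
            self._rollbacks.pop()()
```

A usage example: a human reviewer (or an anomaly detector) who distrusts what the agent just did can call `rollback_all()` and replay the `audit_log` to see every attempted action, including denied ones.

```python
store = {}
guard = GuardedExecutor(allowed_actions={"set_key"})
guard.execute("set_key", do=lambda: store.update(a=1), undo=lambda: store.pop("a"))
guard.rollback_all()  # store is empty again; the audit trail remains
```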
