Security operations center vendors are promising autonomous threat investigation and "humanless operations," but practitioners describe a reality of constrained deployments and shallow AI integration. A new report from Google Cloud's Anton Chuvakin and Aunoo AI's Oliver Rochford draws on more than 30 vendor briefings and interviews with CISOs running AI SOC tools in production. Market adoption sits at just 1-5 percent according to Gartner's 2025 Hype Cycle, with most teams waiting for AI capabilities to be built into existing SIEM and XDR platforms rather than buying standalone solutions.
What's actually happening in production reveals the gap between vendor promises and reality. Teams deploy AI for alert enrichment, investigation summaries, and report drafting — but keep humans in the decision loop for anything that matters. The report identifies "pilot purgatory" as a common pattern: proof-of-value converts to small production deployment, AI handles low-stakes tasks, expansion never comes. Some mature detection engineering teams report that well-prompted general-purpose LLMs with access to internal documentation outperform vendor products that lack environmental context.
The metrics don't support vendor adoption claims either. "Exposure" isn't "trust," Rochford notes — analysts may see AI-generated summaries on every alert, but that doesn't mean they act on them. Chuvakin calls for sharper definitions around performance claims: "'50% faster investigations' could mean almost anything." Vendor statistics often count feature activation or survey responses about "exploring" AI tools, not actual operational reliance.
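The exposure-versus-reliance distinction can be made concrete in instrumentation. A minimal sketch, assuming hypothetical alert-disposition records (the record fields and function names here are illustrative, not drawn from the report or any vendor product):

```python
from dataclasses import dataclass

# Hypothetical per-alert record; field names are illustrative assumptions.
@dataclass
class AlertRecord:
    ai_summary_shown: bool      # analyst saw an AI-generated summary ("exposure")
    ai_verdict_followed: bool   # analyst's disposition matched the AI recommendation
    analyst_overrode: bool      # analyst explicitly rejected the AI verdict

def exposure_rate(alerts: list[AlertRecord]) -> float:
    """Fraction of alerts where an AI summary was displayed at all.
    This is the number vendor adoption statistics tend to count."""
    return sum(a.ai_summary_shown for a in alerts) / len(alerts)

def reliance_rate(alerts: list[AlertRecord]) -> float:
    """Fraction of AI-exposed alerts where the analyst actually followed
    the AI verdict rather than overriding it -- closer to 'trust'."""
    exposed = [a for a in alerts if a.ai_summary_shown]
    if not exposed:
        return 0.0
    followed = sum(a.ai_verdict_followed and not a.analyst_overrode
                   for a in exposed)
    return followed / len(exposed)

alerts = [
    AlertRecord(ai_summary_shown=True,  ai_verdict_followed=True,  analyst_overrode=False),
    AlertRecord(ai_summary_shown=True,  ai_verdict_followed=False, analyst_overrode=True),
    AlertRecord(ai_summary_shown=True,  ai_verdict_followed=True,  analyst_overrode=False),
    AlertRecord(ai_summary_shown=False, ai_verdict_followed=False, analyst_overrode=False),
]
print(f"exposure: {exposure_rate(alerts):.0%}")   # 3 of 4 alerts showed a summary
print(f"reliance: {reliance_rate(alerts):.0%}")   # analyst followed AI on 2 of those 3
```

Tracking the two numbers separately makes the gap Rochford describes visible: exposure can sit near 100 percent while reliance stays flat.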
For security teams evaluating AI SOC tools: start narrow, and measure actual changes in analyst behavior, not just feature usage. The autonomous future vendors are selling isn't here yet, and basing buying decisions on roadmap promises rather than current capabilities is expensive wishful thinking.
