Kumar Ravi, Chief Security & Resilience Officer at TMF Group, is pushing back against the AI vendor sales pitch that focuses on capabilities over controls. In a new interview, Ravi argues that managing partners evaluating AI tools are asking the wrong questions—obsessing over ransomware protection while ignoring "gradual and imperceptible" threats like over-privileged access and weak workflow controls that "accumulate across multiple processes, teams, systems and applications."
This cuts to the heart of AI procurement dysfunction. While vendors demo impressive features and enterprise buyers focus on obvious attack vectors, the real risk sits in boring identity management and data governance. Ravi's point about legal privilege creating information-sharing bottlenecks is particularly sharp—firms increasingly "treat all data points as privileged," which "can slow down and jeopardise timely information-sharing" with regulators and peers who need "speedy, specific and actionable insights."
What's missing from most AI vendor conversations is the unsexy stuff that actually matters: Who has access to what data? How are permissions managed across AI systems? What happens when your AI tool integrates with existing workflows that already have privilege creep? Ravi's broader argument—that security needs board-level measurement and independent assurance—suggests most organizations are buying AI tools without understanding their actual security posture.
For developers integrating AI APIs and tools, this means auditing not just the AI vendor's security claims, but also how their tools will interact with your existing access controls. The flashy AI demo won't show you the permission sprawl that happens six months after deployment.
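To make that concrete, here's a minimal sketch of what such an audit might look like: a script that diffs the scopes actually granted to an AI service account against the least-privilege baseline it was approved for. Everything in it (the scope names, the `Grant` record, the snapshot data) is hypothetical; a real version would pull grants from your IdP or cloud IAM API rather than a hard-coded list.

```python
from dataclasses import dataclass

# Least-privilege baseline: the only scopes the AI tool was approved for.
# (Hypothetical scope names, for illustration only.)
APPROVED_SCOPES = {"documents:read", "summaries:write"}

@dataclass
class Grant:
    principal: str    # service account or user the scope is attached to
    scope: str        # permission string as your IAM system reports it
    granted_via: str  # direct grant, group membership, role inheritance, ...

def find_sprawl(grants: list[Grant]) -> list[Grant]:
    """Return grants that exceed the approved baseline.

    Inherited grants are the 'gradual and imperceptible' kind Ravi
    describes: nobody added them deliberately, they accumulated.
    """
    return [g for g in grants if g.scope not in APPROVED_SCOPES]

if __name__ == "__main__":
    # Hypothetical snapshot six months after deployment.
    snapshot = [
        Grant("svc-ai-assistant", "documents:read", "direct"),
        Grant("svc-ai-assistant", "summaries:write", "direct"),
        Grant("svc-ai-assistant", "documents:delete", "role:editor"),
        Grant("svc-ai-assistant", "hr-records:read", "group:all-staff"),
    ]
    for g in find_sprawl(snapshot):
        print(f"OVER-PRIVILEGED: {g.principal} has {g.scope} via {g.granted_via}")
```

Run against a live grant snapshot on a schedule, a check like this surfaces exactly the drift the vendor demo never will: the two flagged grants here arrived via role inheritance and group membership, not a deliberate decision.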
