Two AI security startups emerged from stealth this week with nearly identical pitches: using AI agents to secure code that was probably generated by AI in the first place. Gitar raised $9 million to review AI-generated code, while London-based Trent AI pulled in $13 million from LocalGlobe and Cambridge Innovation Capital for what it calls a "multi-agent security solution" that secures AI agents throughout their lifecycle.

This feels like peak 2026 AI logic — we're now at the point where we need AI to watch AI that's writing code for humans who increasingly don't understand what the AI wrote. Trent AI's founders, former AWS engineers, at least seem to grasp the recursive complexity they're dealing with. Their platform promises agents that "work together continuously" to scan models, analyze risks, patch vulnerabilities, and validate fixes across the entire development workflow.

What's telling is how differently these companies are positioning essentially the same problem. While Gitar focuses narrowly on code review, Trent AI is going broader with "layered platform" language that suggests they understand this isn't just about catching bugs — it's about securing entire autonomous systems that are making decisions without human oversight. Meanwhile, open-source projects like "The Agency" are already giving developers specialized AI agents for everything from frontend development to Reddit management, showing how quickly this multi-agent approach is becoming table stakes.

For developers, this represents both opportunity and exhaustion. Yes, AI agents can probably catch security issues human reviewers miss. But we're also adding another layer of AI complexity to debug when things go wrong. The real test will be whether these security agents can explain their decisions clearly enough for humans to trust them — or whether we're just building an AI house of cards.