TechCrunch reported on April 29 that Scout AI, the defense-AI startup co-founded in August 2024 by Colby Adcock and Collin Otis, has raised $100 million, with Booz Allen Ventures among the investors. The company emerged from stealth in April 2025 with a $15 million seed round and two DoD contracts; per independent reporting, it now holds four DoD contracts and is competing for a fifth. Adcock previously worked in tech private equity and sits on the Figure AI board; Otis was a founding engineer at Kodiak Robotics and Head of Data Science / Chief of Staff at Uber ATG. The product is called Fury: a Vision-Language-Action (VLA) foundation model purpose-built for defense robotics.

Fury is a VLA model, the same architecture family that drives robot-control research at the major labs, fine-tuned for autonomous unmanned systems. The pitch: a soldier issues a natural-language command ("scout the ridge to the north, return at first contact") and Fury decomposes it across a fleet of unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs), coordinating their movement and sensor coverage as a single agentic system. Scout publicly showcased the Fury Autonomous Vehicle Orchestrator in February 2026, running a heterogeneous fleet of air and ground systems from natural-language mission intent. In a closed military test, the system "autonomously located and destroyed a target using an explosive drone strike, guided by a web of connected AI agents," per company-cited reporting. The phrase "autonomously located and destroyed" is the part to read carefully. The vehicle of action was an explosive drone; the targeting was AI-driven; and the human role in that test, beyond initial mission framing, is not specified in public materials.

Two patterns connect. First, Scout AI is taking the contracts Anthropic refused. Earlier this week we covered Anthropic's lawsuit against the Pentagon over its supply-chain-risk designation, which followed Anthropic's refusal to drop its no-mass-surveillance and no-autonomous-weapons clauses. Scout is purpose-built for the use case Anthropic walked away from: applying VLA models to autonomous lethal systems. Booz Allen Ventures' investment signals that the major US defense-services consulting firm is positioning for this to be a growing line of business. Second, the architectural shape (natural-language command → multi-agent decomposition → tool execution) is the same pattern we have been writing about all week in the civilian context (Anthropic creative connectors, Google Agents CLI, Slack agent context, OpenAI Codex). Scout is one of the first companies to apply it explicitly to lethal autonomous systems. The MCP-style "give the agent deterministic tools" architecture is the same; only the tools are different.
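The "deterministic tools" half of that pattern can be sketched in a few lines. This is a generic illustration of the architecture, not Scout's or MCP's actual API: the model only ever selects a tool name and arguments; a fixed registry validates and executes, and anything outside the registry is rejected. The tool names below are invented.

```python
from typing import Callable

# Deterministic tool registry: the model proposes {name, args}; the
# host code validates against a closed set and executes. Tool names
# here are hypothetical, for illustration only.
TOOLS: dict[str, Callable[..., str]] = {}


def tool(fn):
    TOOLS[fn.__name__] = fn
    return fn


@tool
def set_waypoint(vehicle_id: str, lat: float, lon: float) -> str:
    return f"{vehicle_id} -> waypoint ({lat}, {lon})"


@tool
def request_human_approval(action: str) -> str:
    # In a weapons context this is the contested step: whether it
    # exists in the loop at all is the question the test framing raises.
    return f"APPROVAL REQUIRED: {action}"


def execute(call: dict) -> str:
    name, args = call["name"], call.get("args", {})
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")  # reject hallucinated tools
    return TOOLS[name](**args)


print(execute({"name": "set_waypoint",
               "args": {"vehicle_id": "uav-1", "lat": 47.61, "lon": -122.33}}))
```

The point of the pattern is that the non-determinism is confined to tool selection; execution is plain code. Whether a `request_human_approval`-style tool is mandatory in the chain is a policy choice, not an architectural one, which is exactly why the civilian and defense stacks look identical.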

For builders, three concrete things. First, the VLA-foundation-model architecture used by Fury is increasingly the open-research direction for robotics — RT-2, Octo, OpenVLA, RT-X are public examples. If you build civilian robotics, the same model families that ship to you ship to defense contractors with different fine-tunes. There is no clean technical separation. Second, Scout is the venture-funded shape of the answer to Anthropic's refusal. If you are an AI engineer evaluating job offers, the funding gradient over the next year is going to point steeply toward defense applications — Booz Allen, Palantir, Anduril, and the Scout AIs of this batch will be hiring against the same talent pool as Anthropic, OpenAI, and Google. Have your own answer ready. Third, the "autonomously located and destroyed a target" framing in Scout's closed-test description sets a bar future regulatory debates will reference. The technical question — what fraction of the kill chain was AI versus human — is exactly what Pentagon contracts like the one Anthropic refused are trying to leave open.
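On the first point, the shared technical core of the RT-2 / OpenVLA line is worth seeing in miniature: the model maps (image, instruction) to discretized action tokens, which are then mapped back to continuous robot commands. RT-2, for example, bins each action dimension into 256 values. The sketch below stubs out the model entirely and shows only the tokenization shape; the 7-DoF assumption and the stub policy are illustrative, not any lab's actual interface.

```python
import random

# Minimal sketch of the VLA control loop shared by RT-2 / OpenVLA-style
# models: (image, instruction) -> action tokens -> continuous commands.
# The policy here is a random stub standing in for a fine-tuned
# vision-language backbone.


def vla_policy(image, instruction: str) -> list[int]:
    # Stub: a real model decodes one 8-bit token per action dimension.
    return [random.randint(0, 255) for _ in range(7)]  # 7-DoF action


def detokenize(tokens: list[int]) -> list[float]:
    # Map each 8-bit bin back to a continuous command in [-1, 1].
    return [t / 127.5 - 1.0 for t in tokens]


obs = object()  # placeholder camera frame
action = detokenize(vla_policy(obs, "pick up the red block"))
print(len(action), min(action) >= -1.0, max(action) <= 1.0)
```

The civilian and defense variants differ in the fine-tuning data and the actuators downstream of `detokenize`, not in this loop. That is the "no clean technical separation" claim in concrete form.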