If you've shipped anything with an AI coding agent (Claude Code, Cursor, Aider, Codex, internal tooling), you've thought about what happens when the agent has shell access *and* your API keys *and* an unrestricted network. "One compromised tool call leaks credentials to an attacker-controlled domain" isn't theoretical; it's the default failure mode of that setup. Pipelock, an open-source firewall for AI agents released by Joshua Waldrep under the PipeLab project, addresses the gap with the architectural choice that matters: enforcement *outside* the agent process. Apache 2.0, ~20MB Go binary, currently at v2.3.0, on GitHub at luckyPipewrench/pipelock.

The capability split is the load-bearing design. The agent process holds the secrets but has no direct network access; the proxy has network access but holds no secrets. All traffic crosses a scanning boundary between the two zones. Network isolation is enforced at the deployment layer (network namespaces, iptables, Docker, Kubernetes NetworkPolicy), not at the application layer, where the agent could disable it. The 11-layer scanner pipeline covers credential exfiltration (48 patterns spanning API keys, tokens, financial accounts, and crypto private keys, with four checksum validators), prompt injection (25 patterns with six normalization passes), SSRF, path traversal, per-domain DLP budgets, and response-side scans for zero-width characters, homoglyphs, and leetspeak encoding evasion. Coverage spans HTTP forward proxy, CONNECT tunnels, WebSocket frames, Model Context Protocol stdio, and Google's Agent-to-Agent protocol. On top of that sit Ed25519-signed evidence receipts, SARIF v2.1.0 integration with GitHub Code Scanning, and compliance mappings to the OWASP MCP Top 10, MITRE ATT&CK, the EU AI Act, SOC 2, and NIST 800-53.

The framing Waldrep gives is the architectural argument the agent-security space has needed: *"Most agent-security tools still need the agent to cooperate. Those controls only work while the agent keeps calling them."* That's the SDK-vs-proxy split. Every security-via-callback approach has the same structural weakness: a compromised or jailbroken agent simply stops calling the safety library, and the controls evaporate. Out-of-process enforcement doesn't need cooperation; the network boundary is enforced by the OS, not by the agent. Expect this pattern to become standard for any production agent deployment with real credentials in scope. Pipelock isn't the only project moving in this direction (sandboxed execution layers, mTLS-enforced tool permissions, Anthropic's own Claude Code permission system), but it's the cleanest open-source articulation of the architectural shift.

If you run AI agents in production with shell access and credentialed API calls, this is the kind of thing that belongs between the agent and the network: not as a replacement for in-agent guardrails, but as a backstop for when those guardrails get bypassed. Apache 2.0, a single Go binary, and MCP plus A2A coverage mean it's deployable today. Read the threat model in the GitHub README before integrating; the per-domain DLP budgets and prompt-injection normalization passes need calibration to your use case. SARIF integration means findings wire into your existing security pipeline without rebuilding telemetry. The bigger move is structural: stop trusting the agent to police itself.
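Those normalization passes are easier to calibrate with a concrete picture of what they defeat. Here's a toy sketch (not Pipelock's code; character tables and function names are illustrative) of a response-side pass that strips zero-width characters and folds a few homoglyphs before pattern matching, so an injection phrase can't hide behind U+200B or Cyrillic look-alikes.

```go
// Toy response-side normalization pass: remove zero-width characters and
// fold a small set of homoglyphs to ASCII before scanning for injection
// phrases. Real confusables tables are far larger (see Unicode UTS #39).
package main

import (
	"fmt"
	"strings"
)

var zeroWidth = []rune{'\u200B', '\u200C', '\u200D', '\uFEFF'}

// A few Cyrillic look-alikes folded to ASCII, purely for illustration.
var homoglyphs = map[rune]rune{
	'а': 'a', 'е': 'e', 'о': 'o', 'р': 'p', 'с': 'c',
}

// normalize drops zero-width runes and maps known homoglyphs to ASCII.
func normalize(s string) string {
	var b strings.Builder
	for _, r := range s {
		skip := false
		for _, z := range zeroWidth {
			if r == z {
				skip = true
				break
			}
		}
		if skip {
			continue
		}
		if ascii, ok := homoglyphs[r]; ok {
			r = ascii
		}
		b.WriteRune(r)
	}
	return b.String()
}

// flagsInjection runs a single illustrative phrase check post-normalization.
func flagsInjection(s string) bool {
	return strings.Contains(strings.ToLower(normalize(s)), "ignore previous instructions")
}

func main() {
	evasive := "ign\u200Bоre previous instructions" // zero-width space + Cyrillic 'о'
	fmt.Println(flagsInjection(evasive))            // true: evasion removed by normalization
	fmt.Println(flagsInjection("normal model output"))
}
```

A naive substring check misses the evasive string entirely; normalization first, matching second is the order that matters, and the scan runs on responses because that's where a compromised upstream smuggles instructions back to the agent.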