Mozilla developer Peter Wilson released cq, a "Stack Overflow for agents" that lets AI coding assistants share knowledge about APIs, frameworks, and debugging solutions. The system works by having agents query a commons before tackling unfamiliar work—if another agent already learned that Stripe returns 200 with an error body for rate-limited requests, your agent knows that upfront. When agents discover something new, they propose it back to the commons, with other agents confirming what works and flagging stale information. Wilson released it as a proof of concept with plugins for Claude Code and OpenCode, plus an MCP server and API for teams.
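The query → propose → confirm/flag loop can be sketched in a few lines. This is a hypothetical in-memory model, not cq's actual API: the `Commons`, `Entry`, `propose`, and `query` names are invented here to illustrate the "knowledge earns trust through use" idea, where confirmations raise an entry's standing and stale flags lower it.

```python
from dataclasses import dataclass

# Hypothetical sketch of the commons loop described above.
# None of these names come from cq itself.

@dataclass
class Entry:
    topic: str
    claim: str
    confirmations: int = 0
    stale_flags: int = 0

    @property
    def trust(self) -> int:
        # Trust accrues from use: confirmations raise it, stale flags lower it.
        return self.confirmations - self.stale_flags

class Commons:
    def __init__(self) -> None:
        self._entries: list[Entry] = []

    def propose(self, topic: str, claim: str) -> Entry:
        # An agent that discovered something new proposes it back.
        entry = Entry(topic, claim)
        self._entries.append(entry)
        return entry

    def query(self, topic: str) -> list[Entry]:
        # Agents check the commons before tackling unfamiliar work;
        # entries with net-negative trust are treated as likely stale.
        matches = [e for e in self._entries if e.topic == topic and e.trust >= 0]
        return sorted(matches, key=lambda e: e.trust, reverse=True)

commons = Commons()
entry = commons.propose(
    "stripe-rate-limit",
    "May return 200 with an error body when rate limited",
)
entry.confirmations += 2   # two other agents verified the workaround
entry.stale_flags += 1     # one agent flagged it as possibly outdated

results = commons.query("stripe-rate-limit")
print(results[0].trust)  # -> 1
```

Even this toy version surfaces the hard question from the next paragraph: a bad agent can inflate `confirmations` just as easily as a good one, so the trust score is only as honest as the agents feeding it.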
This addresses a real pain point I've seen building AI infrastructure: agents constantly burn tokens rediscovering the same workarounds, because training data cuts off months ago and RAG doesn't catch every "unknown unknown." We wrote about agents' reliability problems in March, and cq tackles the knowledge-sharing piece directly. But Wilson's framing of "knowledge earns trust through use, not authority" glosses over the core challenge: how do you prevent one bad agent from poisoning the commons with incorrect solutions that merely seem to work?
The Hacker News discussion reveals the skepticism you'd expect—developers worry about data poisoning, accuracy validation, and whether this just creates a new attack vector. The current alternative of maintaining agents.md files doesn't scale, but at least you control what goes in them. Wilson acknowledges these are major unsolved problems, which is refreshingly honest for a Mozilla.ai blog post.
For teams running multiple agents, cq could cut token costs and reduce the constant debugging of why agents keep trying deprecated APIs. But I'd wait to see how they handle verification before trusting a shared commons with production systems. The concept is sound—we need better knowledge sharing for agents—but the execution needs serious work on trust and validation mechanisms.
