Anthropic's attempt to contain its leaked Claude Code source ended up nuking 8,100 legitimate GitHub repositories this week, including many forks of the company's own official public repo. The DMCA takedown, filed Tuesday, initially targeted 96 specific repositories containing leaked code. But GitHub processes takedowns across entire fork networks, and that automated expansion turned the notice into a dragnet that hit thousands of innocent developers who had simply forked Anthropic's legitimate public Claude Code repository, the very repo the company actively encourages people to fork for bug reports and contributions.

This mess highlights how unprepared AI companies are for handling major leaks. As I covered yesterday, Anthropic's entire Claude Code CLI source leaked, exposing internal architecture details that competitors are already analyzing. The DMCA overreach shows the company is now scrambling with blunt legal instruments that don't match the technical reality of how code spreads. When your lawyers can't distinguish between your own public repo and leaked private code, you've got bigger problems than the leak itself.

The damage control was swift but limited. Anthropic walked back the broad takedown by Wednesday, with executives calling it a "communication mistake." But copies of the leaked code remain easily findable on GitHub and have migrated to platforms like Germany's Codeberg, beyond US DMCA reach. Multiple developers are already using AI tools to create "clean room" reimplementations based on the leaked architecture.

For developers, this reinforces why you should maintain local copies of any code you fork, even from major AI companies. DMCA automation is getting more aggressive, and false positives are inevitable when legal teams reach for blunt legal hammers to solve technical problems.
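One way to keep that local copy is a mirror clone, which preserves every branch and tag rather than just the default branch. Here's a minimal sketch; the paths and the tiny demo "upstream" repo are purely illustrative stand-ins for whatever GitHub fork you actually depend on:

```shell
#!/bin/sh
# Sketch: mirror-clone a repo you depend on, so a takedown of the remote
# doesn't take your copy with it. Paths are illustrative placeholders.
set -e

SRC="/tmp/demo-upstream"        # stand-in for the fork's clone URL
BACKUP="/tmp/demo-backup.git"   # local bare mirror

# (demo setup only: fabricate a tiny upstream repo to mirror)
rm -rf "$SRC" "$BACKUP"
git init -q "$SRC"
git -C "$SRC" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"

# --mirror copies every ref (all branches, tags, notes), not just HEAD
git clone -q --mirror "$SRC" "$BACKUP"

# Later, refresh the backup in place without re-cloning
git --git-dir="$BACKUP" remote update >/dev/null
```

A bare mirror is also easy to push to a second remote (a private server, or a non-US host) if you want redundancy beyond your own disk.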