Meta has indefinitely paused work with AI training contractor Mercor after the startup was compromised through a supply chain attack on LiteLLM, the popular open-source library for managing multiple AI models. Attackers used stolen maintainer credentials to publish malicious versions 1.82.7 and 1.82.8 to PyPI for roughly 40 minutes: brief by human standards, but long enough to infiltrate thousands of companies that auto-update dependencies. Mercor, which facilitates over $2 million in daily payouts by connecting AI labs with human contractors for model training and evaluation, confirmed it was among the affected companies.
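To make the auto-update mechanics concrete, here is a minimal sketch of how a typical pip requirement resolves during a window like this. The project file below is hypothetical; only the version numbers 1.82.7 and 1.82.8 come from the incident reports.

```
# requirements.txt (hypothetical project)

# An open-ended specifier resolves to the newest matching release on PyPI,
# so any CI run or deployment during the ~40-minute window would have
# installed the malicious 1.82.7 or 1.82.8 without anyone touching this file.
litellm>=1.80

# An exact pin keeps resolution on a release you have already vetted
# (the version below is illustrative, assumed to predate the attack):
# litellm==1.82.6
```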
This breach exposes a critical vulnerability in AI infrastructure that goes far beyond any single startup. Mercor sits at a chokepoint in the AI development pipeline, handling the sensitive workflows that connect major labs like OpenAI and Anthropic with the human experts who train their models. When a trusted intermediary is compromised through a common dependency, the blast radius can reach customers who never directly interacted with the vulnerable software. It's the same pattern we've seen in traditional supply chain attacks, but now it's hitting the operational layer that powers AI development.
While Lapsus$ has claimed to have stolen over 4TB of data, Mercor hasn't validated that claim. What's confirmed is more concerning: OpenAI is investigating whether proprietary training data was exposed, and other major AI labs are reevaluating their Mercor relationships. The forensic review is ongoing, but the damage to trust is already done. For an industry built on managing vast amounts of sensitive training data through complex vendor relationships, this is exactly the kind of breach that forces uncomfortable questions about third-party risk management.
For developers building on AI infrastructure, this is a wake-up call about dependency hygiene. LiteLLM is used everywhere because it abstracts away the complexity of working with multiple AI providers, which makes it exactly the kind of foundational library that becomes invisible until it breaks. Pin your dependencies, audit your supply chain, and assume that anything touching your AI workflows could become a vector for compromise.
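One minimal way to act on that advice in a pip-based project is sketched below. pip's hash-checking mode, pip-tools, and pip-audit are real tooling; the package version and hash shown are placeholders, not vetted values for LiteLLM.

```
# Generate a fully pinned, hash-locked lockfile with pip-tools:
pip-compile --generate-hashes requirements.in -o requirements.txt

# Entries in the resulting requirements.txt look like this
# (version and digest are placeholders):
#   litellm==1.82.6 \
#       --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000

# Install in hash-checking mode: pip rejects any artifact whose digest
# doesn't match the lockfile, so a hijacked release can't slip in silently.
pip install --require-hashes -r requirements.txt

# Audit the locked set against known-vulnerability advisories:
pip-audit -r requirements.txt
```

From there, a bot like Dependabot or Renovate can bump the lockfile through reviewed pull requests instead of silent auto-updates, keeping the upgrade path deliberate.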
