
Model Collapse

Data Feedback Loop
The degradation that occurs when AI models are trained on data generated by previous AI models, creating a feedback loop where errors and biases accumulate across generations. Each generation loses some diversity and amplifies some artifacts from the previous one, eventually producing models that generate repetitive, generic, or distorted outputs.

Why it matters

Model collapse is the ticking time bomb of the AI-generated content era. As the internet fills with AI-generated text (estimated at 10–50% of new web content), future models trained on web scrapes will inevitably ingest AI outputs. If this isn't carefully managed, model quality could plateau or degrade. It's why data curation and provenance tracking are becoming critical infrastructure.

Deep Dive

The mechanism: a model trained on real data captures the distribution imperfectly — it overestimates some patterns and misses others. When a second model trains on the first model's outputs, it captures the first model's imperfect distribution, amplifying the errors. By generation 5 or 10, the distribution has collapsed to a narrow, distorted version of the original. Shumailov et al. (2023) demonstrated this empirically across multiple model types.
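This recursive narrowing can be seen in a toy simulation (a sketch, not Shumailov et al.'s experimental setup): each "model" is just a Gaussian fit to its training data, and each new generation trains only on the previous generation's samples. A 10% tail truncation stands in for a model's tendency to undersample rare events.

```python
import random
import statistics

def fit_gaussian(data):
    # "Training": estimate the data distribution as a Gaussian (mean, std).
    return statistics.fmean(data), statistics.pstdev(data)

def sample_model(mu, sigma, n, rng):
    # "Generation": draw synthetic data from the fitted model, then drop
    # the 5% most extreme draws on each side -- a stand-in for models
    # undersampling the rare tails of the true distribution.
    draws = sorted(rng.gauss(mu, sigma) for _ in range(n))
    cut = n // 20
    return draws[cut : n - cut]

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(1000)]  # "real" data, std ~= 1.0

stds = []
for generation in range(10):
    mu, sigma = fit_gaussian(data)
    stds.append(sigma)
    # Each new generation trains ONLY on the previous model's outputs.
    data = sample_model(mu, sigma, 1000, rng)

print(f"std at gen 0: {stds[0]:.3f}  ->  std at gen 9: {stds[-1]:.3f}")
```

The fitted standard deviation shrinks every generation, so after ten generations the "model" produces only a narrow slice of the original distribution. Real language models collapse along many dimensions at once, but the mechanism is the same.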

The Internet Contamination Problem

The practical concern: pre-training datasets are typically scraped from the web, and the web increasingly contains AI-generated content. If 20% of a training corpus is AI-generated, and that AI content has the same statistical biases as the model being trained, those biases get reinforced. The result isn't catastrophic failure but gradual homogenization — models that sound more and more like each other and less like the diversity of human expression.
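The difference between partial contamination and a fully closed loop can be sketched in the same toy setup (the 0.9 narrowing factor below is an assumed model bias, standing in for effects like low-temperature sampling, not a measured value):

```python
import random
import statistics

def run(contamination, generations=10, n=1000, seed=0):
    # contamination: fraction of each generation's training corpus that is
    # the previous model's output; the rest is fresh human ("real") data.
    # The generator narrows its std by an assumed 0.9 bias factor.
    rng = random.Random(seed)
    data = [rng.gauss(0.0, 1.0) for _ in range(n)]  # human data, std ~= 1.0
    for _ in range(generations):
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)
        n_synth = int(n * contamination)
        synthetic = [rng.gauss(mu, 0.9 * sigma) for _ in range(n_synth)]
        fresh = [rng.gauss(0.0, 1.0) for _ in range(n - n_synth)]
        data = synthetic + fresh
    return statistics.pstdev(data)

print(f"20% AI-generated corpus, final std:  {run(0.2):.3f}")
print(f"100% AI-generated corpus, final std: {run(1.0):.3f}")
```

With a steady supply of fresh human data, the bias is continually diluted and the distribution settles near its true spread; with a fully closed loop, the same bias compounds every generation. That is the homogenization-versus-collapse distinction in miniature.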

Mitigations

Solutions include: detecting and filtering AI-generated content from training data (hard at scale), mixing AI-generated data with verified human data (maintaining a "human data floor"), watermarking AI outputs to enable filtering, and maintaining curated, AI-free reference datasets. Some researchers argue that model collapse is overstated if data is properly diversified and quality-controlled, but the risk is taken seriously enough that major labs invest in data provenance.
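The "human data floor" idea can be sketched as a corpus-assembly step (the function name, labels, and floor parameter here are illustrative, not an established API; in practice provenance would come from watermark detectors or classifiers, not trusted labels):

```python
import random

def build_corpus(docs, human_floor=0.5, size=1000, seed=0):
    # docs: list of (text, provenance) pairs, with provenance in
    # {"human", "ai", "unknown"}. This sketch covers only the
    # floor-enforcement step, assuming provenance is already known.
    rng = random.Random(seed)
    human = [d for d in docs if d[1] == "human"]
    other = [d for d in docs if d[1] != "human"]
    n_human = int(size * human_floor)
    if len(human) < n_human:
        raise ValueError("not enough verified-human data to meet the floor")
    # Guarantee the floor, then fill the remainder from the rest.
    corpus = rng.sample(human, n_human)
    corpus += rng.sample(other, min(size - n_human, len(other)))
    rng.shuffle(corpus)
    return corpus
```

The design choice is to fail loudly when verified-human data runs short rather than silently padding with AI-generated content, which is exactly the failure mode the floor exists to prevent.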
