Fundamentals

Contamination

Data Contamination, Benchmark Leakage
When benchmark test data appears in a model's training data, inflating its scores without reflecting genuine capability. If a model "studied the answer key" by seeing test questions during training, its benchmark performance is meaningless. Contamination is a growing problem as training datasets expand and scrape more of the internet, where benchmark data is frequently published.

Why It Matters

Contaminação mina todo o sistema de benchmark que a indústria IA usa para comparar modelos. Um modelo que pontua 90% no MMLU porque memorizou as respostas não é mais esperto que um que pontua 80% que nunca as viu. Enquanto mais benchmarks vazam para dados de treinamento, a comunidade é forçada a criar novos benchmarks constantemente, e avaliações privadas held-out se tornam mais importantes que leaderboards públicos.

Deep Dive

Contamination happens in several ways. Direct inclusion: benchmark data appears verbatim in the training corpus (often via web scraping sites that host benchmark questions). Indirect leakage: training data includes discussions about benchmark questions, model-generated solutions, or derivative content. Temporal leakage: a model is evaluated on a "new" benchmark, but the training data cutoff includes early versions of that benchmark.
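To make the direct-inclusion case concrete, the sketch below flags benchmark questions whose word n-grams appear verbatim in a training corpus. The function names, the toy data, and the choice of n = 8 are illustrative assumptions; published contamination analyses use similar n-gram overlap checks with varying n and normalization.

```python
def ngrams(text, n=8):
    """Lowercased word n-grams of a string (empty set if the text has fewer than n words)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(question, corpus_ngrams, n=8):
    """True if any n-gram of the question occurs verbatim in the training corpus."""
    return not ngrams(question, n).isdisjoint(corpus_ngrams)

# Toy usage: one training document happens to contain a test question verbatim.
training_docs = [
    "here is a quiz what is the capital of the smallest country in europe answer vatican city",
]
test_questions = [
    "what is the capital of the smallest country in europe",             # leaked
    "which element has the highest melting point at standard pressure",  # unseen
]

corpus_ngrams = set()
for doc in training_docs:
    corpus_ngrams |= ngrams(doc)

for q in test_questions:
    print(q, "->", "flagged" if is_contaminated(q, corpus_ngrams) else "clean")
```

Note what this check misses: paraphrases, translations, and partial matches all pass as clean, which is exactly the gap the detection methods below try to close.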

Detection Is Hard

Detecting contamination isn't straightforward. You can search for exact matches of test questions in training data, but paraphrased or partial matches are harder to catch. Some researchers use membership inference attacks — checking if the model's confidence on test examples is suspiciously higher than on similar unseen examples. But these methods have false positives and negatives, and access to training data is often limited.
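A membership-inference-style check might look like the sketch below: compare the model's average log-likelihood on benchmark items against matched controls (e.g., paraphrases the model cannot have seen in training). The `log_likelihood` scorer here is a toy stand-in, not a real API; in practice it would return per-token log-probabilities from the model under audit, and any cutoff on the gap is heuristic.

```python
from statistics import mean

# Toy stand-in for the model under audit: pretend it has memorized one string
# and therefore scores it higher (less negative). Replace with real per-token
# log-probabilities from the model being checked.
MEMORIZED = {"what is the capital of france"}

def log_likelihood(text):
    return -1.0 if text.lower() in MEMORIZED else -4.0

def contamination_gap(test_items, control_items):
    """Mean log-likelihood gap between benchmark items and matched controls.
    A large positive gap suggests the test items were seen in training;
    values near zero are consistent with no contamination."""
    return mean(map(log_likelihood, test_items)) - mean(map(log_likelihood, control_items))

test_items = ["What is the capital of France"]
control_items = ["Name the capital city of France"]  # paraphrase as control
print(contamination_gap(test_items, control_items))  # 3.0 here: suspiciously large
```

As noted above, this signal is noisy: genuinely easy or famous questions can also score high, and contaminated items can hide behind paraphrasing.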

The Response

The community is responding in several ways: private held-out benchmarks that aren't published (like some internal evaluations at AI labs), dynamic benchmarks that generate new questions regularly, Chatbot Arena (which uses real user preferences rather than static test sets), and contamination analysis as a required part of model evaluation reports. The shift toward human evaluation and live benchmarks is partly driven by the contamination problem.
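The dynamic-benchmark idea can be illustrated with a sketch like the one below, which mints fresh question instances from a template at evaluation time, so there is no fixed test set to leak. The template and names are hypothetical; real dynamic benchmarks use far richer generators.

```python
import random

def fresh_question(rng):
    """Return a (prompt, answer) pair generated at evaluation time."""
    a, b = rng.randint(100, 999), rng.randint(100, 999)
    return f"What is {a} * {b}?", str(a * b)

rng = random.Random()  # deliberately unseeded: different questions every run
for _ in range(3):
    prompt, answer = fresh_question(rng)
    print(prompt, "->", answer)
```

The trade-off is that templated questions probe only narrow skills, which is one reason live human-preference evaluations like Chatbot Arena complement them.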
