Core Concepts

Contamination

Data Contamination, Benchmark Leakage
When benchmark test data appears in a model's training data, it inflates the model's scores without reflecting genuine capability. If a model effectively studied the answer key by seeing test questions during training, its benchmark performance is meaningless. Contamination is a growing problem as training datasets get larger and scrape more of the internet, where benchmark data is often published.

Why It Matters

Contamination undermines the entire benchmarking system the AI industry uses to compare models. A model that scores 90% on MMLU because it memorized the answers is not smarter than one that scores 80% without ever having seen them. As more benchmarks leak into training data, the community is forced to keep creating new benchmarks, and private held-out evaluations become more important than public leaderboards.

Deep Dive

Contamination happens in several ways. Direct inclusion: benchmark data appears verbatim in the training corpus (often via web scraping sites that host benchmark questions). Indirect leakage: training data includes discussions about benchmark questions, model-generated solutions, or derivative content. Temporal leakage: a model is evaluated on a "new" benchmark, but the training data cutoff includes early versions of that benchmark.
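
To make the direct-inclusion case concrete, here is a minimal sketch of n-gram overlap detection, in the spirit of the 13-gram contamination checks reported in analyses like the GPT-3 paper's. The whitespace tokenization and the 50% overlap threshold are illustrative assumptions, not a standard protocol.

```python
# Sketch: flag training documents that share long verbatim n-gram spans
# with benchmark questions. Tokenization and thresholds are assumptions.

def ngrams(tokens, n):
    """Return the set of consecutive n-gram tuples in a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_score(benchmark_text, train_doc, n=13):
    """Fraction of the benchmark item's n-grams found verbatim in a training doc."""
    bench = ngrams(benchmark_text.lower().split(), n)
    train = ngrams(train_doc.lower().split(), n)
    if not bench:
        return 0.0
    return len(bench & train) / len(bench)

def find_contaminated(benchmark_questions, training_docs, threshold=0.5, n=13):
    """List (question, doc_index) pairs where a training doc reproduces
    a substantial span of a test question."""
    hits = []
    for q in benchmark_questions:
        for i, doc in enumerate(training_docs):
            if overlap_score(q, doc, n) >= threshold:
                hits.append((q, i))
    return hits
```

Real pipelines run this at scale with hashed n-gram indexes rather than pairwise comparison, but the principle is the same: long exact spans shared between test set and corpus are strong evidence of direct inclusion.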

Detection Is Hard

Detecting contamination isn't straightforward. You can search for exact matches of test questions in training data, but paraphrased or partial matches are harder to catch. Some researchers use membership inference attacks — checking if the model's confidence on test examples is suspiciously higher than on similar unseen examples. But these methods have false positives and negatives, and access to training data is often limited.
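
As a sketch of the membership-inference idea above: compare the model's per-token loss on benchmark items against comparable items it could not have seen. This assumes a Hugging Face causal LM (gpt2 here is only a stand-in) and uses a simple t-test; real analyses need much more careful controls, and this is an illustration, not a validated test.

```python
# Sketch: if the model is systematically more confident (lower loss) on
# benchmark items than on matched unseen items, suspect contamination.
# The model choice and the t-test are illustrative assumptions.

import torch
from scipy import stats
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def mean_nll(text):
    """Mean per-token negative log-likelihood the model assigns to a text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return out.loss.item()

def suspicion_test(test_examples, control_examples):
    """Compare losses on benchmark items vs. similar unseen items.
    A clearly lower mean loss on the benchmark (negative t statistic,
    small p-value) is a contamination signal, not proof."""
    test_losses = [mean_nll(t) for t in test_examples]
    control_losses = [mean_nll(t) for t in control_examples]
    t_stat, p_value = stats.ttest_ind(test_losses, control_losses)
    return t_stat, p_value
```

The hard part in practice is the control set: the "similar unseen examples" must match the benchmark in topic, length, and style, or the comparison measures distribution shift rather than memorization.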

The Response

The community is responding in several ways: private held-out benchmarks that aren't published (like some internal evaluations at AI labs), dynamic benchmarks that generate new questions regularly, Chatbot Arena (which uses real user preferences rather than static test sets), and contamination analysis as a required part of model evaluation reports. The shift toward human evaluation and live benchmarks is partly driven by the contamination problem.
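
As a toy illustration of the dynamic-benchmark idea, the sketch below generates questions at evaluation time, so they cannot exist in any training corpus. The arithmetic template and the `model_fn` callable are hypothetical stand-ins for a real question generator and model API.

```python
# Toy dynamic benchmark: questions are created fresh at evaluation time,
# so no training corpus can contain them. Real dynamic benchmarks use far
# richer templates; this only demonstrates the principle.

import random

def fresh_question(rng):
    """Generate a new arithmetic question and its answer on demand."""
    a, b = rng.randint(100, 999), rng.randint(100, 999)
    return f"What is {a} * {b}?", a * b

def evaluate(model_fn, num_questions=100, seed=None):
    """Score a model callable on questions that did not exist until now."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(num_questions):
        question, answer = fresh_question(rng)
        if model_fn(question).strip() == str(answer):
            correct += 1
    return correct / num_questions
```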

