Fundamentals

Dataset

Training Set, Data
A structured collection of data used to train, evaluate, or test a machine learning model. Datasets can be labeled (each example has a known correct answer) or unlabeled (raw data without annotations). The quality, size, diversity, and representativeness of a dataset fundamentally determine what a model can learn.
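The labeled/unlabeled distinction can be made concrete with a minimal sketch (the examples below are invented toy data, not drawn from any real dataset):

```python
# Labeled data: each example pairs an input with a known correct answer.
labeled = [
    {"text": "This movie was wonderful", "label": "positive"},
    {"text": "A complete waste of time", "label": "negative"},
]

# Unlabeled data: raw inputs with no annotations attached.
unlabeled = [
    {"text": "The screening starts at eight"},
    {"text": "I have mixed feelings about the ending"},
]

# A quick structural check: every labeled example carries an answer.
has_labels = all("label" in ex for ex in labeled)
print(has_labels)  # True
```

Supervised learning consumes the first form; the second is what pre-training corpora and clustering methods typically work with.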

Why it matters

Garbage in, garbage out. The most elegant architecture trained on a bad dataset will produce bad results. Conversely, a simple model trained on excellent data often outperforms a complex model trained on noise. Dataset curation is arguably the most impactful and least glamorous part of AI development.

Deep Dive

Datasets come in many forms: text corpora for language models, labeled images for classifiers, question-answer pairs for fine-tuning, preference pairs for alignment, and benchmark datasets for evaluation. The distinction between training set (what the model learns from), validation set (what guides hyperparameter tuning), and test set (what measures final performance) is fundamental — evaluating on training data is meaningless because the model has memorized it.
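The train/validation/test partition described above can be sketched in a few lines; this is a hedged minimal version (the `split_dataset` helper and its fraction parameters are illustrative, not a standard API):

```python
import random

def split_dataset(examples, val_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle and partition examples into disjoint train/validation/test splits."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    n_val = int(len(shuffled) * val_frac)
    test = shuffled[:n_test]
    val = shuffled[n_test:n_test + n_val]
    train = shuffled[n_test + n_val:]
    return train, val, test

train, val, test = split_dataset(list(range(100)))
print(len(train), len(val), len(test))  # 80 10 10
```

Because the three splits are disjoint, a metric computed on `test` reflects generalization rather than memorization, which is exactly why evaluating on training data is meaningless.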

The Data Scaling Story

LLM pre-training datasets have grown from millions of tokens (early GPT) to trillions (modern models). Common Crawl, Wikipedia, books, code repositories, scientific papers, and curated web text form the typical mix. But more data isn't always better — the Chinchilla scaling laws showed that data quality and quantity must scale together with model size. Deduplication, filtering toxic or low-quality content, and balancing domains are all critical steps.
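Deduplication, one of the filtering steps mentioned above, can be sketched with exact-match hashing (a simplification: production pipelines also use fuzzy methods such as MinHash for near-duplicates; the helper names here are illustrative):

```python
import hashlib

def normalize(text):
    """Lowercase and collapse whitespace so trivially different copies hash alike."""
    return " ".join(text.lower().split())

def deduplicate(docs):
    """Keep the first occurrence of each distinct document, by content hash."""
    seen, unique = set(), []
    for doc in docs:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = ["Hello   World", "hello world", "Another document"]
print(deduplicate(docs))  # ['Hello   World', 'Another document']
```

Even this crude pass matters at scale: repeated documents are effectively upweighted during training, which distorts what the model learns.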

Bias Lives in the Data

Every dataset carries the biases of its sources. A model trained mostly on English web text will perform worse on other languages. A dataset scraped from the internet inherits society's prejudices. This isn't a problem you can fix with architecture — it requires careful data curation, auditing, and post-training mitigation. The most impactful AI ethics work often happens at the dataset level.
