
Validation Set

Also known as: Dev Set, Hold-Out Set
A subset of data held out from training, used to evaluate model performance during development and to tune hyperparameters. In the three-way split, the training set trains the model, the validation set guides decisions about the model (learning rate, architecture, when to stop), and the test set provides the final, unbiased estimate of performance. The validation set is your mirror during development.
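A minimal sketch of such a three-way split; the helper name and the 10%/10% fractions here are illustrative, not fixed by this entry:

```python
import numpy as np

def three_way_split(n_examples, val_frac=0.1, test_frac=0.1, seed=42):
    """Shuffle example indices and carve out train / validation / test partitions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_examples)
    n_val = int(n_examples * val_frac)
    n_test = int(n_examples * test_frac)
    test_idx = idx[:n_test]                  # final evaluation only
    val_idx = idx[n_test:n_test + n_val]     # guides development decisions
    train_idx = idx[n_test + n_val:]         # fits the model
    return train_idx, val_idx, test_idx

train_idx, val_idx, test_idx = three_way_split(10_000)
```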

Why it matters

Without a validation set, you are flying blind. The training loss tells you how well the model fits the training data, but not how well it generalizes. The validation set answers the question that actually matters: "how will this model perform on data it hasn't seen?" Every decision during model development (hyperparameters, architecture choices, training duration) should be evaluated on the validation set.
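As a small, hedged illustration with scikit-learn (the toy data, the Ridge model, and the candidate values are assumptions made for the example, not part of this entry): every tuning decision reads only the validation split, and the test split is used exactly once at the end.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

# Toy regression data with an 80/10/10 split (sizes are illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=1000)
X_train, y_train = X[:800], y[:800]
X_val, y_val = X[800:900], y[800:900]
X_test, y_test = X[900:], y[900:]

best_alpha, best_val_mse = None, float("inf")
for alpha in [0.01, 0.1, 1.0, 10.0]:
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    val_mse = mean_squared_error(y_val, model.predict(X_val))  # decide here
    if val_mse < best_val_mse:
        best_alpha, best_val_mse = alpha, val_mse

final_model = Ridge(alpha=best_alpha).fit(X_train, y_train)
test_mse = mean_squared_error(y_test, final_model.predict(X_test))  # report once
```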

Deep Dive

Typical splits: 80% training, 10% validation, 10% test. For large datasets, smaller percentages for validation and test suffice (even 1% of a million examples is 10,000 — plenty for reliable evaluation). For small datasets, cross-validation is preferred (see: Cross-Validation). The key rule: never use the test set for any decision during development. It's only for the final evaluation. If you peek at the test set during development, your performance estimate becomes biased.
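For the small-data case, a brief cross-validation sketch with scikit-learn (the toy dataset and classifier are illustrative): each example gets used for evaluation in exactly one fold instead of being permanently set aside in a fixed validation slice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Small synthetic classification dataset (200 examples).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# 5-fold cross-validation: the average score across folds plays the same
# role a fixed validation set would, with lower variance on small data.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean(), scores.std())
```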

Stratification

When splitting data, ensure each split has a representative distribution of classes, domains, and other important characteristics. If your dataset is 90% English and 10% French, a random split might put all French examples in the training set, leaving you unable to evaluate French performance. Stratified splitting ensures proportional representation in each split. For time-series data, use temporal splits (train on past, validate on future) rather than random splits.
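A sketch of a stratified split using scikit-learn's train_test_split (the toy English/French corpus is illustrative). For time-series data you would instead sort by timestamp and slice, training on the past and validating on the future.

```python
from sklearn.model_selection import train_test_split

# Toy corpus: 90% English, 10% French documents.
texts = [f"doc {i}" for i in range(100)]
langs = ["en"] * 90 + ["fr"] * 10

# stratify=langs preserves the 90/10 language ratio in both splits,
# so the validation split always contains French examples to evaluate.
train_texts, val_texts, train_langs, val_langs = train_test_split(
    texts, langs, test_size=0.2, stratify=langs, random_state=42
)
```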

Validation in LLM Development

For LLM pre-training, the validation set is a held-out portion of the training corpus, used to compute perplexity during training. For fine-tuning, it's a held-out portion of the fine-tuning dataset. For alignment (RLHF/DPO), validation is more complex: automated metrics (reward model scores) plus human evaluation on held-out prompts. The validation strategy should match how the model will actually be used — if users will ask diverse questions, the validation set should be diverse.
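A hedged sketch of computing held-out perplexity with PyTorch; the model(input_ids) interface, the batch layout, and any label shifting are assumptions about your own pipeline rather than a fixed API.

```python
import math
import torch
import torch.nn.functional as F

def validation_perplexity(model, val_batches, device="cpu"):
    """exp(mean token cross-entropy) over held-out batches.

    Assumes model(input_ids) returns logits of shape (batch, seq_len, vocab)
    and each batch is a dict with already-aligned "input_ids" and "labels";
    adapt padding and label shifting to your data pipeline.
    """
    model.eval()
    total_loss, total_tokens = 0.0, 0
    with torch.no_grad():
        for batch in val_batches:
            input_ids = batch["input_ids"].to(device)
            labels = batch["labels"].to(device)
            logits = model(input_ids)
            loss = F.cross_entropy(
                logits.view(-1, logits.size(-1)), labels.view(-1), reduction="sum"
            )
            total_loss += loss.item()
            total_tokens += labels.numel()
    return math.exp(total_loss / total_tokens)
```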

Related concepts
