
Early Stopping

Patience, Validation-Based Stopping
Stopping training when performance on a held-out validation set stops improving, rather than training for a fixed number of steps. As training continues, training loss keeps falling, but validation loss eventually starts to rise: the model is overfitting to the training data. Early stopping catches this inflection point and saves the best model before quality degrades.

Why It Matters

Early stopping is the simplest and most effective regularization technique for fine-tuning. Without it, you risk training too long and destroying the capabilities you wanted to preserve. With it, the model stops automatically at its best point. The patience parameter (how many evaluations without improvement before stopping) is one of the most important hyperparameters in fine-tuning.

Deep Dive

The process: (1) split your data into training and validation sets, (2) evaluate on the validation set periodically during training, (3) track the best validation metric (loss, accuracy, F1), (4) if the metric hasn't improved for N evaluations (patience), stop training and revert to the checkpoint with the best validation score. This prevents the model from memorizing training data beyond the point where it helps generalization.
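
A minimal sketch of that loop in Python, assuming a PyTorch-style model that exposes state_dict()/load_state_dict() and caller-supplied train_one_epoch and evaluate functions (all hypothetical names):

```python
import copy

def train_with_early_stopping(model, train_one_epoch, evaluate,
                              max_epochs=50, patience=3):
    """Train until the validation loss stops improving for `patience`
    consecutive evaluations, then restore the best checkpoint."""
    best_loss = float("inf")
    best_state = None
    stale_evals = 0  # evaluations since the last improvement

    for epoch in range(max_epochs):
        train_one_epoch(model)      # one pass over the training set
        val_loss = evaluate(model)  # periodic check on the validation set

        if val_loss < best_loss:
            best_loss = val_loss
            # Checkpoint the best model seen so far.
            best_state = copy.deepcopy(model.state_dict())
            stale_evals = 0
        else:
            stale_evals += 1
            if stale_evals >= patience:
                break  # patience exhausted: stop training

    if best_state is not None:
        model.load_state_dict(best_state)  # revert to the best checkpoint
    return best_loss
```

The same pattern works with any scalar validation metric; for accuracy or F1, flip the comparison so higher is better.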

In LLM Fine-Tuning

For LLM fine-tuning, early stopping is especially important because catastrophic forgetting can destroy base model capabilities. A model fine-tuned for too long on customer support data might become great at support but lose its ability to do math or write code. Monitoring validation loss across multiple task types (not just the fine-tuning task) helps catch this. Typical fine-tuning runs are 1–5 epochs with patience of 2–3 evaluations.
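
As a concrete illustration, here is a sketch of how such a run might be wired up with the Hugging Face transformers library's built-in EarlyStoppingCallback. The model, train_ds, and val_ds variables are placeholders, and on recent transformers versions the evaluation_strategy argument is named eval_strategy:

```python
from transformers import Trainer, TrainingArguments, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="out",
    num_train_epochs=5,            # upper bound; early stopping may end sooner
    evaluation_strategy="steps",   # evaluate periodically during training
    eval_steps=200,
    save_strategy="steps",         # must match the evaluation schedule
    save_steps=200,
    load_best_model_at_end=True,   # revert to the best checkpoint on stop
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)

trainer = Trainer(
    model=model,                   # placeholder: the model being fine-tuned
    args=args,
    train_dataset=train_ds,        # placeholder: tokenized training split
    eval_dataset=val_ds,           # placeholder: held-out validation split
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```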

Not Used in Pre-Training

Interestingly, LLM pre-training rarely uses early stopping. The training runs are so expensive and the datasets so large that models typically train for a predetermined number of tokens (based on scaling laws). Overfitting is rare during pre-training because the model usually never sees the same data twice. Early stopping is primarily a fine-tuning and classical ML technique.
