
Early Stopping

Patience, Validation-Based Stopping
Stop training when performance on a held-out validation set stops improving, rather than training for a fixed number of steps. As training continues, training loss keeps falling, but validation loss eventually starts to rise: the model is overfitting the training data. Early stopping catches this turning point, saving the best model before quality degrades.

Why It Matters

Early stopping is the simplest and most effective regularization technique for fine-tuning. Without it, you risk training too long and destroying capabilities you want to preserve. With it, the model stops automatically at the sweet spot. The "patience" parameter (the number of evaluations without improvement before stopping) is one of the most important hyperparameters in fine-tuning.

Deep Dive

The process: (1) split your data into training and validation sets, (2) evaluate on the validation set periodically during training, (3) track the best validation metric (loss, accuracy, F1), (4) if the metric hasn't improved for N evaluations (patience), stop training and revert to the checkpoint with the best validation score. This prevents the model from memorizing training data beyond the point where it helps generalization.
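
A minimal sketch of this loop in Python. The training and checkpoint helpers (`train_one_epoch`, `evaluate`, `save_best`, `restore_best`) are hypothetical callables standing in for your own training code; `evaluate` is assumed to return a validation loss, where lower is better:

```python
import math

def fit_with_early_stopping(train_one_epoch, evaluate, save_best, restore_best,
                            max_epochs=20, patience=3):
    """Train until the validation loss stops improving for `patience` evals."""
    best_val_loss = math.inf
    evals_since_improvement = 0

    for epoch in range(max_epochs):
        train_one_epoch()
        val_loss = evaluate()                 # periodic validation pass

        if val_loss < best_val_loss:          # improvement: reset the patience counter
            best_val_loss = val_loss
            evals_since_improvement = 0
            save_best()                       # checkpoint the best model so far
        else:
            evals_since_improvement += 1
            if evals_since_improvement >= patience:
                print(f"Early stop at epoch {epoch}: "
                      f"no improvement for {patience} evaluations")
                break

    restore_best()                            # revert to the best checkpoint
    return best_val_loss
```

A lower patience stops sooner but risks halting on evaluation noise; a higher patience trains longer before giving up.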

In LLM Fine-Tuning

For LLM fine-tuning, early stopping is especially important because catastrophic forgetting can destroy base model capabilities. A model fine-tuned for too long on customer support data might become great at support but lose its ability to do math or write code. Monitoring validation loss across multiple task types (not just the fine-tuning task) helps catch this. Typical fine-tuning runs are 1–5 epochs with patience of 2–3 evaluations.
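
As one concrete example, the Hugging Face `transformers` Trainer ships an `EarlyStoppingCallback` that implements this pattern. A sketch, assuming `model`, `train_ds`, and `val_ds` are already prepared elsewhere; exact argument names can vary slightly across `transformers` versions:

```python
from transformers import Trainer, TrainingArguments, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="out",
    num_train_epochs=5,            # upper bound; early stopping may end sooner
    evaluation_strategy="steps",   # evaluate periodically during training
    eval_steps=200,
    save_strategy="steps",         # checkpointing must align with evaluation
    save_steps=200,
    load_best_model_at_end=True,   # revert to the best checkpoint when done
    metric_for_best_model="eval_loss",
    greater_is_better=False,       # lower validation loss is better
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=val_ds,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```

Note that the callback requires `load_best_model_at_end=True` and a `metric_for_best_model`, since it both decides when to stop and which checkpoint to restore.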

Not Used in Pre-Training

Interestingly, LLM pre-training rarely uses early stopping. The training runs are so expensive and the datasets so large that models typically train for a predetermined number of tokens (based on scaling laws). Overfitting is rare during pre-training because the model usually never sees the same data twice. Early stopping is primarily a fine-tuning and classical ML technique.
