Training

Learning Rate Schedule

LR Schedule, Warmup, Cosine Annealing
A strategy of changing the learning rate over the course of training rather than holding it constant. Most modern training uses warmup (a gradual increase from near zero to a peak value) followed by decay (a gradual decrease toward zero). Cosine annealing is the most common decay schedule. The learning rate controls how large each gradient update step is, and is arguably the most important hyperparameter in training.

Why It Matters

Getting the learning rate schedule right can make or break a training run. Too high and the model diverges (loss spikes, failed runs). Too low and training is slow or gets stuck. The schedule interacts with batch size, model size, and data, so there is no universal setting. Understanding learning rate schedules helps you read training curves and diagnose training problems.

Deep Dive

The standard LLM training schedule has three phases: (1) warmup: linearly increase the learning rate from ~0 to the peak value over the first 0.1–2% of training steps. This prevents the randomly initialized model from taking too-large steps early on. (2) Stable/peak: maintain the peak learning rate for the bulk of training. (3) Decay: decrease the learning rate following a cosine curve to near-zero by the end. This lets the model make fine-grained adjustments in the final phase.
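A minimal sketch of this three-phase schedule in Python; the warmup fraction, decay start point, and 10%-of-peak floor used below are illustrative assumptions, not values from this page:

```python
import math

def lr_at_step(step, total_steps, peak_lr,
               warmup_frac=0.01, decay_start_frac=0.5, min_lr_ratio=0.1):
    """Warmup -> stable -> cosine decay; the fractions are illustrative defaults."""
    warmup_steps = max(1, int(total_steps * warmup_frac))
    decay_start = int(total_steps * decay_start_frac)
    min_lr = peak_lr * min_lr_ratio

    if step < warmup_steps:
        # Phase 1: linear warmup from ~0 to the peak value
        return peak_lr * (step + 1) / warmup_steps
    if step < decay_start:
        # Phase 2: hold the peak learning rate for the bulk of training
        return peak_lr
    # Phase 3: cosine decay from peak_lr down to min_lr by the final step
    progress = (step - decay_start) / max(1, total_steps - decay_start)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))
```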

Cosine Annealing

Cosine decay: lr(t) = lr_min + 0.5 · (lr_max − lr_min) · (1 + cos(π · t / T)), where t is the current step and T is the total steps. This produces a smooth curve that decreases slowly at first, then faster, then slowly again as it approaches the minimum. Why cosine? It works well empirically and avoids the abrupt transitions of step-based schedules. The final learning rate is typically 10x smaller than the peak.
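In practice this is rarely implemented by hand; PyTorch, for example, provides the same formula as torch.optim.lr_scheduler.CosineAnnealingLR. A minimal usage sketch, where the toy model, peak learning rate (3e-4), floor (3e-5), and step count are all illustrative assumptions:

```python
import torch

# Toy model and optimizer purely to demonstrate the scheduler API.
model = torch.nn.Linear(16, 16)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)   # lr_max
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=1_000, eta_min=3e-5)                    # lr_min ~= lr_max / 10

for step in range(1_000):
    # ... forward pass, loss.backward(), gradient clipping, etc. would go here ...
    optimizer.step()
    scheduler.step()  # lr follows lr_min + 0.5*(lr_max - lr_min)*(1 + cos(pi*t/T))
```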

The Learning Rate-Batch Size Relationship

The linear scaling rule: if you double the batch size, double the learning rate. This preserves the effective step size when the gradient estimate becomes more accurate (from the larger batch). The rule holds approximately for moderate batch sizes but breaks down at very large batches, where the optimal learning rate grows slower than linearly. Getting this relationship right is critical for distributed training where batch size scales with the number of GPUs.
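A sketch of the rule; the reference batch size of 256, base learning rate of 3e-4, and 8-GPU setup below are hypothetical numbers for illustration:

```python
def scaled_lr(base_lr, base_batch_size, batch_size):
    """Linear scaling rule: scale the learning rate in proportion to batch size.

    Only approximately valid; at very large batches the optimal learning
    rate grows more slowly than this.
    """
    return base_lr * (batch_size / base_batch_size)

# Hypothetical reference run: batch size 256 at lr 3e-4.
# Scaling out to 8 GPUs, each with a per-GPU batch of 256:
num_gpus = 8
global_batch = num_gpus * 256                                       # 2048
lr = scaled_lr(3e-4, base_batch_size=256, batch_size=global_batch)
print(lr)                                                           # 0.0024, i.e. 8x the base LR
```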
