The standard LLM training schedule has three phases:

1. Warmup: linearly increase the learning rate from ~0 to the peak value over the first 0.1–2% of training steps. This prevents the randomly initialized model from taking overly large steps early on.
2. Stable/peak: maintain the peak learning rate for the bulk of training.
3. Decay: decrease the learning rate along a cosine curve to near zero by the end, letting the model make fine-grained adjustments in the final phase.
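The three phases can be sketched as a single piecewise function. This is an illustrative implementation, not a specific library's API; the parameter names (`warmup_steps`, `decay_start`, etc.) and the example values in the comments are assumptions chosen to match the ranges in the text.

```python
import math

def lr_schedule(step, total_steps, peak_lr, min_lr, warmup_steps, decay_start):
    """Illustrative three-phase schedule: linear warmup, hold, cosine decay."""
    # Phase 1: linear warmup from ~0 up to peak_lr.
    # warmup_steps is typically 0.1-2% of total_steps.
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    # Phase 2: hold at the peak for the bulk of training.
    if step < decay_start:
        return peak_lr
    # Phase 3: cosine decay from peak_lr down to min_lr.
    t = step - decay_start
    T = total_steps - decay_start
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * t / T))

# Example: 1000 total steps, 1% warmup, decay over the last half.
print(lr_schedule(5, 1000, 1e-3, 1e-4, 10, 500))    # mid-warmup, below peak
print(lr_schedule(300, 1000, 1e-3, 1e-4, 10, 500))  # stable phase, at peak
print(lr_schedule(1000, 1000, 1e-3, 1e-4, 10, 500)) # fully decayed, at min_lr
```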
Cosine decay: lr(t) = lr_min + 0.5 · (lr_max − lr_min) · (1 + cos(π · t / T)), where t is the current decay step and T is the total number of decay steps. This produces a smooth curve that decreases slowly at first, then faster, then slowly again as it approaches the minimum. Why cosine? It works well empirically and avoids the abrupt transitions of step-based schedules. The final learning rate lr_min is typically set to one-tenth of the peak.
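A direct transcription of the formula makes its endpoints easy to check: cos(0) = 1 gives lr_max at t = 0, and cos(π) = −1 gives lr_min at t = T. The example values (3e-4 peak, 3e-5 floor) are illustrative, following the one-tenth-of-peak convention above.

```python
import math

def cosine_lr(t, T, lr_max, lr_min):
    # lr(t) = lr_min + 0.5 * (lr_max - lr_min) * (1 + cos(pi * t / T))
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / T))

# Endpoints and midpoint of the decay:
print(cosine_lr(0, 1000, 3e-4, 3e-5))     # t=0:   lr_max (3e-4)
print(cosine_lr(500, 1000, 3e-4, 3e-5))   # t=T/2: halfway, ~1.65e-4
print(cosine_lr(1000, 1000, 3e-4, 3e-5))  # t=T:   lr_min (3e-5)
```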
The linear scaling rule: if you double the batch size, double the learning rate. This preserves the effective step size when the gradient estimate becomes more accurate (from the larger batch). The rule holds approximately for moderate batch sizes but breaks down at very large batches, where the optimal learning rate grows more slowly than linearly. Getting this relationship right is critical for distributed training, where the global batch size scales with the number of GPUs.
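The rule itself is a one-line computation. The helper below is a hypothetical sketch; as the text notes, at very large batches the scaled value should be treated as a starting point for tuning rather than a guarantee.

```python
def scale_lr(base_lr, base_batch, new_batch):
    # Linear scaling rule: the learning rate grows in proportion to
    # the batch size. Holds approximately for moderate batches; at
    # very large batches the optimal lr grows sub-linearly, so treat
    # the result as an initial guess to tune from.
    return base_lr * new_batch / base_batch

# e.g. scaling from a global batch of 256 on 1 GPU to 2048 on 8 GPUs:
print(scale_lr(3e-4, 256, 2048))  # 8x the batch -> ~2.4e-3
```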