
Learning Rate Schedule

LR Schedule, Warmup, Cosine Annealing
A strategy for changing the learning rate during training rather than keeping it constant. Most modern training uses warmup (gradually increase from near-zero to peak) followed by decay (gradually decrease toward zero). Cosine annealing is the most common decay schedule. The learning rate controls how large each gradient update step is — arguably the most important hyperparameter in training.

Why it matters

Getting the learning rate schedule right can make or break a training run. Too high and the model diverges (loss spikes, training fails). Too low and it trains too slowly or gets stuck. The schedule interacts with batch size, model size, and data — there's no universal setting. Understanding learning rate schedules helps you interpret training curves and diagnose training issues.

Deep Dive

The standard LLM training schedule has three phases:

1. Warmup: linearly increase the learning rate from ~0 to the peak value over the first 0.1–2% of training steps. This prevents the randomly initialized model from taking too-large steps early on.
2. Stable/peak: maintain the peak learning rate for the bulk of training.
3. Decay: decrease the learning rate following a cosine curve to near-zero by the end. This lets the model make fine-grained adjustments in the final phase.
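The three phases can be sketched as a single step-to-learning-rate function. This is a minimal illustration, not a reference implementation; the peak/minimum values and phase fractions below are assumed defaults, not values from this article:

```python
import math

def lr_at_step(step, total_steps, peak_lr=3e-4, min_lr=3e-5,
               warmup_frac=0.01, stable_frac=0.5):
    """Piecewise schedule: linear warmup -> constant peak -> cosine decay.
    All LR values and phase fractions are illustrative defaults."""
    warmup_steps = int(total_steps * warmup_frac)
    stable_steps = int(total_steps * stable_frac)
    if step < warmup_steps:
        # Phase 1: linear warmup from near-zero up to peak_lr
        return peak_lr * (step + 1) / warmup_steps
    if step < warmup_steps + stable_steps:
        # Phase 2: hold at the peak learning rate
        return peak_lr
    # Phase 3: cosine decay from peak_lr down to min_lr
    decay_steps = total_steps - warmup_steps - stable_steps
    t = step - warmup_steps - stable_steps
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * t / decay_steps))
```

In practice this function would be called once per optimizer step to set the current learning rate; frameworks typically wrap the same logic in a scheduler object.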

Cosine Annealing

Cosine decay: lr(t) = lr_min + 0.5 · (lr_max − lr_min) · (1 + cos(π · t / T)), where t is the current step and T is the total steps. This produces a smooth curve that decreases slowly at first, then faster, then slowly again as it approaches the minimum. Why cosine? It works well empirically and avoids the abrupt transitions of step-based schedules. The final learning rate lr_min is typically set to about one-tenth of the peak lr_max.
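The formula translates directly to code. A minimal sketch (the example lr_max and lr_min values are assumptions for illustration): at t = 0 the cosine term is 1, giving lr_max; at t = T it is −1, giving lr_min.

```python
import math

def cosine_lr(t, T, lr_max, lr_min):
    # lr(t) = lr_min + 0.5 * (lr_max - lr_min) * (1 + cos(pi * t / T))
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / T))

# Endpoints: starts at roughly lr_max (cos(0) = 1), ends at lr_min (cos(pi) = -1).
# The drop between 25% and 50% of training is steeper than between 0% and 25%,
# matching the slow-fast-slow shape described above.
```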

The Learning Rate-Batch Size Relationship

The linear scaling rule: if you double the batch size, double the learning rate. This preserves the effective step size when the gradient estimate becomes more accurate (from the larger batch). The rule holds approximately for moderate batch sizes but breaks down at very large batches, where the optimal learning rate grows slower than linearly. Getting this relationship right is critical for distributed training where batch size scales with the number of GPUs.
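The rule itself is one line of arithmetic; a sketch with illustrative numbers (the base batch size and learning rate here are assumptions, not recommendations):

```python
def scaled_lr(base_lr, base_batch_size, new_batch_size):
    """Linear scaling rule: scale the learning rate in proportion to batch size.
    Holds only approximately, and breaks down at very large batches."""
    return base_lr * (new_batch_size / base_batch_size)

# Doubling the batch from 256 to 512 doubles the learning rate:
# scaled_lr(3e-4, 256, 512) -> 6e-4
```

In multi-GPU data parallelism the global batch size grows with the number of GPUs, so this scaling is applied when moving a recipe from one GPU count to another.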
