Training

Learning Rate Schedule

LR Schedule, Warmup, Cosine Annealing
A strategy for changing the learning rate during training rather than holding it constant. Most modern training uses warmup (a gradual increase from near zero to the peak) followed by decay (a gradual decrease toward zero). Cosine annealing is the most common decay schedule. The learning rate controls how large each gradient update step is, making it arguably the most important hyperparameter in training.

Why it matters

Getting the learning rate schedule right can make or break a training run. Too high, and the model diverges (the loss spikes and training fails). Too low, and training is painfully slow or gets stuck. The schedule interacts with batch size, model size, and data, so there is no universal setting. Understanding learning rate schedules helps you interpret training curves and diagnose training problems.

Deep Dive

The standard LLM training schedule has three phases: (1) warmup: linearly increase the learning rate from ~0 to the peak value over the first 0.1–2% of training steps. This prevents the randomly initialized model from taking too-large steps early on. (2) Stable/peak: maintain the peak learning rate for the bulk of training. (3) Decay: decrease the learning rate following a cosine curve to near-zero by the end. This lets the model make fine-grained adjustments in the final phase.
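As a concrete illustration, below is a minimal Python sketch of this three-phase schedule. The function name, default values, and phase fractions are illustrative assumptions, not taken from any particular framework:

import math

def lr_at_step(step, total_steps, peak_lr=3e-4, min_lr=3e-5,
               warmup_frac=0.01, stable_frac=0.5):
    # Three-phase schedule sketch: linear warmup -> stable peak -> cosine decay.
    # All names and default values here are illustrative assumptions.
    warmup_steps = max(1, int(warmup_frac * total_steps))
    stable_steps = int(stable_frac * total_steps)
    if step < warmup_steps:
        # Phase 1: linear warmup from near zero up to peak_lr
        return peak_lr * (step + 1) / warmup_steps
    if step < warmup_steps + stable_steps:
        # Phase 2: hold the peak learning rate for the bulk of training
        return peak_lr
    # Phase 3: cosine decay from peak_lr down to min_lr
    decay_steps = total_steps - warmup_steps - stable_steps
    t = (step - warmup_steps - stable_steps) / decay_steps
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * t))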

Cosine Annealing

Cosine decay: lr(t) = lr_min + 0.5 · (lr_max − lr_min) · (1 + cos(π · t / T)), where t is the current step and T is the total number of steps. This produces a smooth curve that decreases slowly at first, then faster, then slowly again as it approaches the minimum. Why cosine? It works well empirically and avoids the abrupt transitions of step-based schedules. The final learning rate lr_min is typically set to one-tenth of the peak.
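As a quick sanity check of the formula's endpoints (the step count and learning rate values below are arbitrary examples):

import math

def cosine_lr(t, T, lr_max=3e-4, lr_min=3e-5):
    # lr(t) = lr_min + 0.5 * (lr_max - lr_min) * (1 + cos(pi * t / T))
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / T))

# Starts at the peak, ends at the minimum, and passes through
# their midpoint exactly halfway through training.
assert abs(cosine_lr(0, 1000) - 3e-4) < 1e-12
assert abs(cosine_lr(1000, 1000) - 3e-5) < 1e-12
assert abs(cosine_lr(500, 1000) - 0.5 * (3e-4 + 3e-5)) < 1e-12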

The Learning Rate–Batch Size Relationship

The linear scaling rule: if you double the batch size, double the learning rate. This preserves the effective step size when the gradient estimate becomes more accurate (from the larger batch). The rule holds approximately for moderate batch sizes but breaks down at very large batches, where the optimal learning rate grows slower than linearly. Getting this relationship right is critical for distributed training where batch size scales with the number of GPUs.
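A minimal sketch of the rule with hypothetical numbers: a recipe tuned at batch size 256 with a peak learning rate of 3e-4, scaled up to a global batch of 2048 across 8 GPUs:

def scaled_lr(base_lr, base_batch, new_batch):
    # Linear scaling rule: scale the learning rate in proportion to batch size.
    # Holds approximately for moderate batches; breaks down for very large ones.
    return base_lr * (new_batch / base_batch)

# Hypothetical example: recipe tuned at batch 256 with lr 3e-4,
# scaled to a global batch of 2048 (e.g., 8 GPUs x 256 samples each).
print(scaled_lr(3e-4, 256, 2048))  # 0.0024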
