
Adam Optimizer

Adam, AdamW
The most widely used optimization algorithm for training neural networks. Adam (Adaptive Moment Estimation) combines momentum (using a running average of past gradients) with adaptive learning rates (scaling updates by the inverse of past gradient magnitudes). AdamW adds decoupled weight decay for better regularization. Nearly every modern LLM is trained with AdamW.

Why it matters

Adam works well across a wide range of tasks and hyperparameters, making it the default optimizer. Understanding it explains why training "just works" most of the time (Adam adapts the step size per parameter) and why it sometimes doesn't (Adam's optimizer state adds two extra values per parameter, which matters for large models). It's also the answer to "which optimizer should I use?" in 90% of cases.

Deep Dive

Adam maintains two moving averages per parameter: the first moment (mean of gradients — momentum) and the second moment (mean of squared gradients — adaptive scaling). The update rule: parameter -= lr × m̂ / (√v̂ + ε), where m̂ and v̂ are bias-corrected moments. Parameters with consistently large gradients get smaller updates (they're already well-calibrated). Parameters with small, noisy gradients get larger updates (they need more aggressive movement).
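The update rule above can be sketched in plain NumPy (a minimal single-tensor Adam step; hyperparameter defaults follow the original paper, and the function name is illustrative):

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update. m, v are the running first and second moments;
    t is the 1-based step count used for bias correction."""
    m = beta1 * m + (1 - beta1) * grad       # first moment (momentum)
    v = beta2 * v + (1 - beta2) * grad**2    # second moment (adaptive scaling)
    m_hat = m / (1 - beta1**t)               # bias-corrected moments
    v_hat = v / (1 - beta2**t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Usage: minimize f(x) = x^2 starting from x = 5
x = np.array([5.0])
m = v = np.zeros_like(x)
for t in range(1, 501):
    grad = 2 * x                             # gradient of x^2
    x, m, v = adam_step(x, grad, m, v, t)
```

Note that when the gradient sign is consistent, m̂/√v̂ is close to 1, so each step moves roughly `lr` regardless of the gradient's magnitude; that is the adaptive scaling at work.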

AdamW: The Fix

The original Adam applied weight decay by adding it to the gradient before computing moments, which caused the decay to be scaled by the adaptive learning rate — not what you want. AdamW (Loshchilov & Hutter, 2017) decouples weight decay from the gradient update, applying it directly to the parameters. This seems like a minor fix but significantly improves generalization. All modern LLM training uses AdamW.
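The two decay schemes can be contrasted side by side (a sketch, not a library implementation; `wd` is the weight decay coefficient, other names are illustrative):

```python
import numpy as np

def adam_with_l2(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=0.01):
    # Original Adam + L2: decay is folded into the gradient, so it enters the
    # moments and gets rescaled by the adaptive denominator like everything else.
    grad = grad + wd * param
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat, v_hat = m / (1 - b1**t), v / (1 - b2**t)
    return param - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

def adamw(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=0.01):
    # AdamW: moments see only the raw gradient; decay is applied directly to
    # the parameters, scaled by lr but NOT by the adaptive denominator.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat, v_hat = m / (1 - b1**t), v / (1 - b2**t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param - lr * wd * param, m, v
```

The difference is starkest when the gradient is zero: with L2-in-the-gradient, the decay term alone drives the moments and is normalized by its own magnitude, producing a near-full-size step; with AdamW, the parameter shrinks by exactly lr × wd × param.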

Memory Cost

Adam stores two additional values per parameter (first and second moments), so the optimizer state alone is twice the parameter count: a 70B model needs ~140 GB for weights (FP16) plus ~560 GB for Adam states (two FP32 moments, 8 bytes per parameter), totaling ~700 GB before activations. This is why optimizer state sharding (DeepSpeed ZeRO, FSDP) is essential for large-model training. Some newer optimizers (Adafactor, CAME, Lion) reduce this memory overhead at some cost to stability.
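The back-of-the-envelope arithmetic, assuming FP16 weights and two FP32 moments per parameter (mixed-precision setups often add FP32 master weights on top of this):

```python
params = 70e9                          # 70B parameters

weights_gb = params * 2 / 1e9          # FP16 weights: 2 bytes per parameter
adam_gb = params * 2 * 4 / 1e9         # two FP32 moments: 8 bytes per parameter
total_gb = weights_gb + adam_gb

print(weights_gb, adam_gb, total_gb)   # 140.0 560.0 700.0
```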
