Training

Gradient Descent

SGD, Stochastic Gradient Descent, Backpropagation
The algorithm that trains a neural network by iteratively adjusting its parameters to reduce a loss function. It works by computing the gradient of the loss with respect to each parameter (the direction of steepest increase), then moving each parameter a small step in the opposite direction (downhill). Backpropagation is the technique for efficiently computing these gradients through the layers of the network.
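
A minimal sketch of that idea in Python, minimizing an invented one-parameter loss L(w) = (w - 3)^2; the toy loss, starting value, and learning rate are illustrative assumptions, not from this article:

```python
# Toy example: minimize L(w) = (w - 3)^2 with plain gradient descent.
# The gradient dL/dw = 2 * (w - 3) points uphill; we step the other way.

w = 0.0                 # initial parameter value (assumed for illustration)
learning_rate = 0.1

for step in range(50):
    grad = 2 * (w - 3)                # gradient of the loss w.r.t. w
    w = w - learning_rate * grad      # move a small step downhill

print(w)  # converges toward 3, the minimizer of the loss
```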

Why It Matters

Gradient descent is the engine under the hood of all deep learning. Every model you use, whether an LLM, an image generator, or an embedding model, was trained with gradient descent. Understanding it helps you understand training dynamics: why the learning rate matters, why training diverges or gets stuck, and why modern optimizers like Adam outperform naive gradient descent.

Deep Dive

The full algorithm: (1) take a batch of training examples, (2) run them through the model to get predictions, (3) compute the loss, (4) use backpropagation to compute the gradient of the loss with respect to every parameter, (5) update each parameter by subtracting the gradient times a learning rate, (6) repeat. In practice, "stochastic" gradient descent (SGD) uses random mini-batches rather than the full dataset, which is both computationally necessary (the full dataset doesn't fit in memory) and beneficial (the noise from random batches helps escape local minima).
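
A runnable sketch of those six steps as mini-batch SGD on a linear model in NumPy; the synthetic data, model, and hyperparameters are assumptions chosen for illustration:

```python
import numpy as np

# Mini-batch SGD for a linear model y ≈ X @ w with mean squared error loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))               # toy dataset: 1000 examples, 5 features
true_w = rng.normal(size=5)
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = np.zeros(5)
learning_rate = 0.05
batch_size = 32

for epoch in range(20):
    order = rng.permutation(len(X))          # shuffle so each mini-batch is random
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        Xb, yb = X[idx], y[idx]              # (1) take a mini-batch
        preds = Xb @ w                       # (2) forward pass
        error = preds - yb                   # (3) loss = np.mean(error ** 2)
        grad = 2 * Xb.T @ error / len(idx)   # (4) gradient of the loss w.r.t. w
        w -= learning_rate * grad            # (5) update: step against the gradient
```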

Adam and Modern Optimizers

Plain SGD is rarely used today. Adam (Adaptive Moment Estimation) maintains a running average of both the gradient and its squared magnitude for each parameter, effectively giving each parameter its own adaptive learning rate. Parameters with consistently large gradients get smaller updates (they're already well-calibrated), while parameters with small, noisy gradients get larger updates (they need more aggressive movement). AdamW adds weight decay for regularization. Most LLM training uses AdamW or variants.
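
To make the moment estimates concrete, here is a hand-rolled sketch of the Adam/AdamW update in NumPy; the function name, hyperparameter defaults, and decoupled weight-decay term follow the usual formulation but are written as an illustrative sketch, not any particular library's implementation:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
              eps=1e-8, weight_decay=0.0):
    # Running averages of the gradient (m) and its squared magnitude (v)
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction for the zero-initialized averages (t counts steps from 1)
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Per-parameter adaptive step: a large v (consistently big gradients) shrinks the update
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    # AdamW-style decoupled weight decay, applied directly to the weights
    w = w - lr * weight_decay * w
    return w, m, v
```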

The Learning Rate

The learning rate is arguably the single most important hyperparameter in training. Too high and the model overshoots the minimum, loss diverges, and training fails. Too low and training takes forever or gets stuck. Modern training uses learning rate schedules: start with a warmup phase (gradually increasing from near-zero), reach a peak, then decay (cosine annealing is common). The peak learning rate, warmup duration, and decay schedule all interact with batch size and model architecture. Getting this right is a significant part of training large models.
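
A sketch of such a schedule, linear warmup followed by cosine decay, in Python; the peak learning rate, warmup length, and step counts below are placeholder values, not recommendations:

```python
import math

def lr_at(step, peak_lr=3e-4, warmup_steps=2000, total_steps=100_000, min_lr=3e-5):
    if step < warmup_steps:
        # Linear warmup from near zero up to the peak learning rate
        return peak_lr * (step + 1) / warmup_steps
    # Cosine annealing from the peak down to min_lr over the remaining steps
    progress = min(1.0, (step - warmup_steps) / max(1, total_steps - warmup_steps))
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))
```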
