Basics

Backpropagation

Backprop, Backward Pass
The algorithm that computes how much each parameter in a neural network contributed to the error, so that gradient descent can update the parameters efficiently. Backpropagation applies the chain rule of calculus to the network in reverse: starting from the loss at the output, it propagates gradients backward through each layer to determine how much "blame" each weight bears.

Why It Matters

Backpropagation is the algorithm that makes training neural networks possible. Without an efficient way to compute gradients for billions of parameters, gradient descent would be computationally infeasible. Every model you use, from a small classifier to a 400B-parameter LLM, is trained with backpropagation. It is the single most important algorithm in deep learning.

Deep Dive

The forward pass: input flows through the network, each layer applies its transformation, and the final layer produces a prediction. The loss function computes how wrong the prediction is.

The backward pass: starting from the loss, backpropagation computes ∂loss/∂weight for every weight in the network using the chain rule: ∂loss/∂w = ∂loss/∂output · ∂output/∂hidden · ∂hidden/∂w. Each layer receives the gradient from the layer above and passes its own gradient to the layer below.
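
To make the two passes concrete, here is a minimal sketch of forward and backward for a tiny one-hidden-layer network written with NumPy. The layer sizes, variable names, and the squared-error loss are chosen only for this illustration.

```python
import numpy as np

# Tiny illustrative network: x -> hidden (ReLU) -> output, squared-error loss.
rng = np.random.default_rng(0)
x = rng.normal(size=(3,))          # input
y_true = rng.normal(size=(2,))     # target
W1 = rng.normal(size=(4, 3))       # input -> hidden weights
W2 = rng.normal(size=(2, 4))       # hidden -> output weights

# Forward pass: each layer applies its transformation.
z1 = W1 @ x                        # hidden pre-activation
h = np.maximum(z1, 0.0)            # ReLU
y_pred = W2 @ h                    # prediction
loss = 0.5 * np.sum((y_pred - y_true) ** 2)

# Backward pass: start from the loss, apply the chain rule layer by layer.
dL_dy = y_pred - y_true            # ∂loss/∂output
dL_dW2 = np.outer(dL_dy, h)        # ∂loss/∂W2
dL_dh = W2.T @ dL_dy               # gradient passed down to the hidden layer
dL_dz1 = dL_dh * (z1 > 0)          # through the ReLU
dL_dW1 = np.outer(dL_dz1, x)       # ∂loss/∂W1

print(loss, dL_dW1.shape, dL_dW2.shape)
```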

Computational Efficiency

Naively computing the gradient for each weight independently would require a separate forward pass per weight, which is impossibly expensive for billions of parameters. Backpropagation reuses intermediate results: the gradient with respect to each layer's output is computed once and reused both for all of that layer's weights and for propagating the gradient further back. The backward pass costs roughly 2x the forward pass in compute, so the total cost of one training step (forward + backward + update) is about 3x a single forward pass.
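
As a back-of-the-envelope illustration of that 1x/2x split, the sketch below counts matrix-multiply FLOPs for a single linear layer. The sizes are arbitrary, and activation and optimizer costs are ignored.

```python
# Rough FLOP count for one linear layer y = x @ W, with x: (B, d_in), W: (d_in, d_out).
# Sizes below are arbitrary; ReLU and optimizer costs are ignored.
B, d_in, d_out = 32, 1024, 1024

forward = 2 * B * d_in * d_out    # one matmul: a multiply and an add per output element
# The backward pass needs two matmuls of the same size:
#   dL/dW = x.T @ dL/dy   and   dL/dx = dL/dy @ W.T
backward = 2 * forward
train_step = forward + backward   # roughly 3x a single forward pass

print(f"forward:  {forward:.2e} FLOPs")
print(f"backward: {backward:.2e} FLOPs ({backward / forward:.0f}x forward)")
print(f"one step: {train_step:.2e} FLOPs ({train_step / forward:.0f}x forward)")
```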

Automatic Differentiation

Modern deep learning frameworks (PyTorch, JAX) implement backpropagation through automatic differentiation (autograd). You define the forward computation, and the framework automatically constructs the backward computation graph and computes gradients. This means you never derive gradients by hand: you define the model architecture and loss, call loss.backward() (in PyTorch; JAX uses transformations like jax.grad), and the framework handles the rest. This automation is what makes rapid architecture experimentation practical.
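
A minimal sketch of that workflow in PyTorch; the model, loss, and data here are placeholders chosen only for the example.

```python
import torch
import torch.nn as nn

# Placeholder model and data, chosen only to illustrate the autograd workflow.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(32, 8)   # batch of inputs
y = torch.randn(32, 1)   # targets

# You only define the forward computation...
pred = model(x)
loss = loss_fn(pred, y)

# ...and autograd builds the backward graph and fills in param.grad for you.
optimizer.zero_grad()
loss.backward()
optimizer.step()

print(loss.item())
```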

Related Concepts
