
Backpropagation

Also known as: Backprop, Backward Pass
The algorithm that computes how much each parameter in a neural network contributed to the error, allowing gradient descent to update the parameters efficiently. Backpropagation applies the chain rule of calculus backward through the network: starting from the loss at the output, it propagates gradients back through each layer to determine each weight's share of the blame.

Why it matters

Backpropagation is the algorithm that makes training neural networks possible. Without an efficient way to compute gradients for billions of parameters, gradient descent would be computationally infeasible. Every model you use, from a small classifier to a 400B-parameter LLM, was trained with backpropagation. It is the most important algorithm in deep learning.

Deep Dive

The forward pass: input flows through the network, each layer applies its transformation, and the final layer produces a prediction. The loss function computes how wrong that prediction is.

The backward pass: starting from the loss, backpropagation computes ∂loss/∂weight for every weight in the network using the chain rule, ∂loss/∂w = ∂loss/∂output · ∂output/∂hidden · ∂hidden/∂w. Each layer receives the gradient from the layer above and passes its own gradient to the layer below.
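
A minimal sketch may make the chain rule concrete. The tiny one-hidden-layer network below, with a tanh activation and a squared-error loss, is purely illustrative, not a reference implementation:

```python
import numpy as np

# Illustrative sketch: manual forward and backward pass for a tiny
# one-hidden-layer network with a squared-error loss.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 3))        # input
y = np.array([[1.0]])              # target
W1 = rng.normal(size=(3, 4))       # input -> hidden weights
W2 = rng.normal(size=(4, 1))       # hidden -> output weights

# Forward pass: each layer applies its transformation.
h = np.tanh(x @ W1)                # hidden activations
y_hat = h @ W2                     # prediction
loss = 0.5 * np.sum((y_hat - y) ** 2)

# Backward pass: chain rule, starting from the loss at the output.
d_yhat = y_hat - y                 # dloss/dy_hat
dW2 = h.T @ d_yhat                 # dloss/dW2 = dloss/dy_hat * dy_hat/dW2
d_h = d_yhat @ W2.T                # gradient passed down to the hidden layer
dW1 = x.T @ (d_h * (1 - h ** 2))   # chain through tanh's derivative

# Gradient descent update.
lr = 0.1
W1 -= lr * dW1
W2 -= lr * dW2
```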

Computational Efficiency

Naively computing the gradient for each weight independently would require a separate forward pass per weight, which is prohibitively expensive for billions of parameters. Backpropagation reuses intermediate results: the gradient with respect to each layer's output is computed once and reused for every weight in that layer. The backward pass costs roughly 2x the forward pass in compute, so one full training step (forward + backward + update) costs about 3x a single forward pass.
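
A back-of-the-envelope comparison shows how large the gap is at scale. The helper function and the 7B-parameter figure below are illustrative assumptions, not measurements:

```python
# Illustrative cost comparison: naive per-weight finite differences vs. backprop.
# Counts are in units of "one forward pass" of compute.
def training_step_cost(n_params, forward_cost=1.0):
    naive = (n_params + 1) * forward_cost   # one perturbed forward pass per weight
    backprop = 3 * forward_cost             # forward + backward (~2x) + update
    return naive, backprop

naive, bp = training_step_cost(n_params=7_000_000_000)
print(f"naive / backprop cost ratio: {naive / bp:.2e}")  # ~2.3e9 for a 7B-parameter model
```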

Automatic Differentiation

Modern deep learning frameworks (PyTorch, JAX) implement backpropagation through automatic differentiation (autograd). You define the forward computation, and the framework automatically constructs the backward computation graph and computes gradients. This means you never manually derive gradients — you define the model architecture and loss, call loss.backward(), and the framework handles the rest. This automation is what makes rapid architecture experimentation practical.
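
A minimal PyTorch sketch of that workflow; the architecture, loss, and data below are placeholders chosen only to show the shape of the API:

```python
import torch
import torch.nn as nn

# Define the forward computation; autograd records it as a graph.
model = nn.Sequential(nn.Linear(3, 4), nn.Tanh(), nn.Linear(4, 1))
loss_fn = nn.MSELoss()

x = torch.randn(8, 3)              # a small batch of inputs (illustrative)
y = torch.randn(8, 1)              # targets (illustrative)

y_hat = model(x)                   # forward pass
loss = loss_fn(y_hat, y)
loss.backward()                    # backward pass: gradients via the chain rule

# Gradients are now stored on each parameter's .grad attribute.
for name, p in model.named_parameters():
    print(name, p.grad.shape)
```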

