Fundamentals

Backpropagation

Backprop, Backward Pass
The algorithm that computes how much each parameter in a neural network contributed to the error, allowing gradient descent to update parameters efficiently. Backpropagation applies the chain rule from calculus backward through the network: starting from the loss at the output, it propagates gradients backward through each layer to determine each weight's share of the blame.

Why it matters

Backpropagation is the algorithm that makes training neural networks possible. Without an efficient way to compute gradients for billions of parameters, gradient descent would be computationally infeasible. Every model you use, from a small classifier to a 400B-parameter LLM, was trained using backpropagation. It is the most important algorithm in deep learning.

Deep Dive

The forward pass: input flows through the network, each layer applies its transformation, and the final layer produces a prediction. The loss function computes how wrong the prediction is. The backward pass: starting from the loss, backpropagation computes ∂loss/∂weight for every weight in the network using the chain rule: ∂loss/∂w = ∂loss/∂output · ∂output/∂hidden · ∂hidden/∂w. Each layer receives the gradient from the layer above and passes its own gradient to the layer below.
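As a concrete illustration, here is a minimal sketch of the forward and backward pass for a one-hidden-layer network in NumPy. The layer sizes, variable names, and the mean-squared-error loss are illustrative choices, not anything prescribed above.

```python
import numpy as np

# Tiny network: x -> hidden (ReLU) -> output, with a mean-squared-error loss.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))          # one input example
y = rng.normal(size=(1, 2))          # target
W1 = rng.normal(size=(4, 8)) * 0.1   # input -> hidden weights
W2 = rng.normal(size=(8, 2)) * 0.1   # hidden -> output weights

# Forward pass: each layer applies its transformation.
h_pre = x @ W1                       # hidden pre-activation
h = np.maximum(h_pre, 0.0)           # ReLU
out = h @ W2                         # prediction
loss = np.mean((out - y) ** 2)       # scalar loss

# Backward pass: apply the chain rule from the loss back toward the input.
d_out = 2 * (out - y) / out.size     # ∂loss/∂out
d_W2 = h.T @ d_out                   # ∂loss/∂W2 = ∂out/∂W2 · ∂loss/∂out
d_h = d_out @ W2.T                   # gradient passed to the layer below
d_h_pre = d_h * (h_pre > 0)          # ReLU gate: ∂hidden/∂pre-activation
d_W1 = x.T @ d_h_pre                 # ∂loss/∂W1

# Gradient descent update with a small learning rate.
lr = 0.01
W1 -= lr * d_W1
W2 -= lr * d_W2
```

Note how the gradient flowing into the hidden layer (d_h) is computed once and then reused both to gate through the ReLU and to produce the gradient for W1; this reuse is the point of the backward pass.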

Computational Efficiency

Naively computing the gradient for each weight independently would require a separate forward pass per weight — impossibly expensive for billions of parameters. Backpropagation reuses intermediate results: the gradient at each layer is computed once and shared with all weights in that layer. The backward pass costs roughly 2x the forward pass in compute, meaning the total cost of one training step (forward + backward + update) is about 3x a single forward pass.
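To make the 2x and 3x figures concrete, here is a back-of-the-envelope sketch for a single dense layer. Counting 2 FLOPs per multiply-accumulate and assuming the cost is dominated by matrix multiplies are simplifications, not exact accounting.

```python
# Rough FLOP count for one dense layer y = x @ W, with W of shape (d_in, d_out)
# and a batch of B examples.
def training_step_flops(B, d_in, d_out):
    forward = 2 * B * d_in * d_out           # one matmul: y = x @ W
    backward = 2 * (2 * B * d_in * d_out)    # two matmuls: grad wrt input and grad wrt W
    return forward, backward, forward + backward

fwd, bwd, total = training_step_flops(B=32, d_in=1024, d_out=1024)
print(bwd / fwd)    # ~2x: the backward pass costs roughly twice the forward pass
print(total / fwd)  # ~3x: one training step costs about three forward passes
```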

Automatic Differentiation

Modern deep learning frameworks (PyTorch, JAX) implement backpropagation through automatic differentiation (autograd). You define the forward computation, and the framework automatically constructs the backward computation graph and computes gradients. This means you never manually derive gradients — you define the model architecture and loss, call loss.backward(), and the framework handles the rest. This automation is what makes rapid architecture experimentation practical.
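The sketch below shows this workflow with PyTorch's autograd. The toy model, random data, and SGD optimizer are placeholders for illustration; only the define-forward / backward() / step() pattern is the point.

```python
import torch
import torch.nn as nn

# A small model; autograd records the forward computation as a graph.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(16, 4)   # toy batch of inputs
y = torch.randn(16, 2)   # toy targets

prediction = model(x)            # forward pass
loss = loss_fn(prediction, y)

optimizer.zero_grad()            # clear gradients from the previous step
loss.backward()                  # backward pass: autograd fills .grad for every parameter
optimizer.step()                 # gradient descent update
```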
