Fundamentals

Residual Connection

Also known as: Skip Connection, Shortcut Connection
A connection that bypasses one or more layers by adding the input directly to the layer's output: output = layer(x) + x. Instead of each layer learning a complete transformation, it only needs to learn the "residual": the difference from the identity function. Residual connections appear in every Transformer layer and are essential for training deep networks.
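The defining equation can be sketched in a few lines of NumPy (the names `layer` and `residual_block` are illustrative, not from any particular library):

```python
import numpy as np

def layer(x, W):
    """A toy 'layer': a linear map followed by a ReLU nonlinearity."""
    return np.maximum(0.0, x @ W)

def residual_block(x, W):
    """output = layer(x) + x: the layer only needs to learn the residual."""
    return layer(x, W) + x

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))
W = np.zeros((4, 4))  # a layer that has learned "nothing"

# With W = 0 the residual is zero, so the block is exactly the identity.
out = residual_block(x, W)
print(np.allclose(out, x))  # True: identity is the default behavior
```

Note the design consequence: a block that learns nothing passes its input through untouched, which is exactly the "identity by default" property described above.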

Why it matters

Without residual connections, deep networks are nearly impossible to train — gradients vanish or explode across many layers. Residual connections provide a gradient highway that lets information (and gradients) flow directly from early layers to late layers, bypassing any number of intermediate transformations. They're why we can train 100+ layer networks at all.
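A toy scalar calculation (made-up derivative values, not a trained network) illustrates the highway effect. Backpropagation multiplies each layer's local derivative along the chain; the skip connection adds 1 to every factor because d/dx [f(x) + x] = f'(x) + 1:

```python
# Assumed setup: 50 layers, each with a tiny local derivative of 0.01.
depth = 50
f_prime = 0.01

# Plain stack: the gradient is a product of f'(x) terms and vanishes.
plain_grad = f_prime ** depth

# Residual stack: each block contributes (1 + f'(x)), so the product
# stays near 1 thanks to the direct identity path.
residual_grad = (1 + f_prime) ** depth

print(plain_grad)     # ~1e-100: vanished
print(residual_grad)  # ~1.64: survives all 50 layers
```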

Deep Dive

Introduced in ResNet (He et al., 2015), residual connections solved the "degradation problem": deeper networks performed worse than shallow ones, not because of overfitting but because optimization became harder. The insight: it's easier to learn f(x) = 0 (the residual is nothing, just pass the input through) than to learn f(x) = x (reproduce the input perfectly). Residual connections make the identity function the default, and each layer only needs to learn useful modifications.

In Transformers

Every Transformer layer applies two residual connections: one around the attention sub-layer (x + attention(x)) and one around the feedforward sub-layer (x + ffn(x)). This means the input to layer 1 has a direct additive path to the output of the final layer (say, layer 32 in a 32-layer model): the skip path carries it forward unchanged while each sub-layer adds its contribution on top. This "residual stream" is a central concept in mechanistic interpretability: each layer reads from and writes to this shared stream, and the final output is the sum of all layers' contributions.
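The two skip connections can be sketched with stand-in sub-layers (a uniform-averaging `attention` and a two-matrix `ffn` are toy assumptions, and the layer normalization used in real Transformers is omitted for brevity):

```python
import numpy as np

def attention(x):
    # Toy "attention": a uniform average over positions, just to show
    # a sub-layer that mixes information between positions.
    n = x.shape[0]
    return np.full((n, n), 1.0 / n) @ x

def ffn(x, W1, W2):
    # Toy position-wise feedforward sub-layer.
    return np.maximum(0.0, x @ W1) @ W2

def transformer_layer(x, W1, W2):
    x = x + attention(x)    # residual around the attention sub-layer
    x = x + ffn(x, W1, W2)  # residual around the feedforward sub-layer
    return x

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 8))           # 3 positions, model dimension 8
W1 = rng.normal(size=(8, 16)) * 0.1
W2 = rng.normal(size=(16, 8)) * 0.1

out = transformer_layer(x, W1, W2)
print(out.shape)  # (3, 8): the stream keeps its shape through the layer
```

Because every sub-layer writes back into the same-shaped stream, layers can be stacked arbitrarily deep without changing the interface.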

The Residual Stream View

Thinking of a Transformer as a residual stream with layers that read and write to it (rather than a sequential pipeline) changes how you understand the architecture. Attention layers move information between positions in the stream. FFN layers transform information at each position. The final output is the original input plus all the modifications from all layers. This view explains why you can often remove layers with limited impact — the residual stream preserves information even when individual layers are skipped.
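The stream view can be verified directly in a toy model (the `layer_write` function is an illustrative stand-in for whatever a real layer computes): running the stack sequentially gives exactly the original input plus the sum of every layer's write.

```python
import numpy as np

rng = np.random.default_rng(0)
layers = [rng.normal(size=(4, 4)) * 0.1 for _ in range(5)]

def layer_write(x, W):
    # What this toy layer "writes" to the residual stream.
    return np.tanh(x @ W)

x0 = rng.normal(size=(1, 4))

# Run the stack the usual sequential way, recording each layer's write.
x = x0
contributions = []
for W in layers:
    w = layer_write(x, W)
    contributions.append(w)
    x = x + w

# Stream view: final output = original input + sum of all contributions.
reconstructed = x0 + sum(contributions)
print(np.allclose(x, reconstructed))  # True
```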
