
Residual Connection

Skip Connection, Shortcut Connection
A connection that bypasses one or more layers by adding the input directly to the output: output = layer(x) + x. Instead of learning a complete transformation, each layer only needs to learn the "residual", i.e. the difference from the identity function. Residual connections appear in every Transformer layer and are essential for training deep networks.
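As a minimal sketch of output = layer(x) + x (the layer function and names here are illustrative, not from any particular library):

```python
import numpy as np

def layer(x, W):
    # An illustrative single layer: ReLU(W @ x).
    return np.maximum(0.0, W @ x)

def residual_block(x, W):
    # output = layer(x) + x : the layer only has to learn the residual.
    return layer(x, W) + x

x = np.array([1.0, -2.0, 3.0])
W = np.zeros((3, 3))  # a layer that has learned nothing useful yet
out = residual_block(x, W)
# With zero weights the block reduces to the identity: out equals x.
```

Note the default behavior: a block whose layer outputs zero passes the input through unchanged, which is exactly the "identity by default" property described above.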

Why It Matters

Without residual connections, deep networks are nearly impossible to train: gradients vanish or explode across many layers. Residual connections provide a gradient highway that lets information (and gradients) flow directly from early layers to late layers, bypassing any number of intermediate transformations. This is why we can train networks with 100+ layers.
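The gradient-highway effect can be seen numerically in a toy stack of scalar layers. This is a hedged sketch (scalar layers, finite-difference gradients), not any real training setup:

```python
import numpy as np

def deep_forward(x, weights, residual):
    # A stack of scalar layers y = tanh(w * x), optionally with skip connections.
    for w in weights:
        h = np.tanh(w * x)
        x = x + h if residual else h
    return x

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.3, size=50)  # 50 layers with smallish weights

def grad(residual, x=0.1, eps=1e-6):
    # Numerical derivative d(output)/d(input) through the whole stack.
    return (deep_forward(x + eps, weights, residual)
            - deep_forward(x - eps, weights, residual)) / (2 * eps)

g_plain = abs(grad(residual=False))     # shrinks toward zero with depth
g_skip = abs(grad(residual=True))       # stays orders of magnitude larger
```

Without skips, the gradient is a product of 50 small per-layer factors and collapses; with skips, each factor is (1 + layer derivative), so the identity term keeps a direct path for the gradient.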

Deep Dive

Introduced in ResNet (He et al., 2015), residual connections solved the "degradation problem": deeper networks performed worse than shallow ones, not because of overfitting but because optimization became harder. The insight: it's easier to learn f(x) = 0 (the residual is nothing, just pass the input through) than to learn f(x) = x (reproduce the input perfectly). Residual connections make the identity function the default, and each layer only needs to learn useful modifications.

In Transformers

Every Transformer layer applies two residual connections: one around the attention sub-layer (x + attention(x)) and one around the feedforward sub-layer (x + ffn(x)). This means the input to layer 1 has a direct additive path to the output of the final layer (layer 32 in a 32-layer model): it is carried forward in the sum at every step. This "residual stream" is a central concept in mechanistic interpretability: each layer reads from and writes to this shared stream, and the final output is the sum of all layers' contributions.
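In code, a single Transformer layer's two residual connections look like the sketch below. The attention and ffn arguments stand in for the real sub-layers, and layer normalization is omitted for brevity:

```python
import numpy as np

def transformer_layer(x, attention, ffn):
    # Residual around the attention sub-layer.
    x = x + attention(x)
    # Residual around the feedforward sub-layer.
    x = x + ffn(x)
    return x

# If every sub-layer writes nothing, the whole stack is exactly the identity,
# so the original input keeps an additive path to the final layer's output.
zero = lambda x: np.zeros_like(x)
x = np.array([1.0, 2.0, 3.0])
out = x
for _ in range(32):
    out = transformer_layer(out, zero, zero)
```

Stacking 32 such layers with zeroed sub-layers returns the input unchanged, which demonstrates the direct additive path from layer 1 to layer 32.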

The Residual Stream View

Thinking of a Transformer as a residual stream with layers that read and write to it (rather than a sequential pipeline) changes how you understand the architecture. Attention layers move information between positions in the stream. FFN layers transform information at each position. The final output is the original input plus all the modifications from all layers. This view explains why you can often remove layers with limited impact — the residual stream preserves information even when individual layers are skipped.
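The decomposition described above (final output = original input + the sum of all layers' writes) can be checked directly. The layer functions below are illustrative stand-ins, not real attention or FFN sub-layers:

```python
import numpy as np

def run_stream(x, layers):
    # Each layer reads the current stream and writes a delta into it.
    stream = x.copy()
    writes = []
    for layer in layers:
        delta = layer(stream)
        writes.append(delta)
        stream = stream + delta
    return stream, writes

x = np.array([1.0, -1.0])
layers = [lambda s: 0.5 * s, lambda s: np.tanh(s), lambda s: -0.1 * s]
out, writes = run_stream(x, layers)
# The final output decomposes as input plus the sum of all layer writes.
assert np.allclose(out, x + sum(writes))
```

This is the residual-stream view in miniature: removing one layer deletes only its write, while the input and every other layer's contribution still reach the output.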
