Core Concepts

Feedforward Network

FFN, MLP Block
The component in every Transformer layer that processes each token independently through two linear transformations with an activation function between them. While attention mixes information across tokens (which tokens relate to which), the feedforward network processes each token's representation individually, applying non-linear transformations that encode knowledge and perform computation.

Why It Matters

The feedforward network is where most of a Transformer's knowledge is stored. Attention gets all the glory, but FFN layers contain the majority of the model's parameters (typically about 2/3 of the total) and are where factual associations, language patterns, and learned computations primarily reside. Understanding this helps explain phenomena like knowledge editing and model pruning.

Deep Dive

The standard FFN: FFN(x) = W2 · activation(W1 · x + b1) + b2, where W1 projects from the model dimension to a larger intermediate dimension (typically 4x), the activation function introduces non-linearity, and W2 projects back to the model dimension. Each position (token) passes through this independently: the FFN never sees other tokens; only the attention layers mix information across positions.
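This forward pass can be sketched in a few lines of NumPy. The dimensions and random weights below are toy values for illustration; a GELU activation is assumed, as in the original Transformer-style FFNs:

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU, a common FFN activation
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def ffn(x, W1, b1, W2, b2):
    """Standard Transformer FFN, applied to each position independently.
    x: (seq_len, d_model); W1: (d_model, d_ff); W2: (d_ff, d_model)."""
    return gelu(x @ W1 + b1) @ W2 + b2

# Toy dimensions (illustrative, not from any real model)
d_model, d_ff, seq_len = 8, 32, 4        # d_ff = 4 * d_model
rng = np.random.default_rng(0)
x = rng.normal(size=(seq_len, d_model))
W1 = rng.normal(size=(d_model, d_ff)) * 0.02
b1 = np.zeros(d_ff)
W2 = rng.normal(size=(d_ff, d_model)) * 0.02
b2 = np.zeros(d_model)

out = ffn(x, W1, b1, W2, b2)
print(out.shape)  # (4, 8) — same shape as the input
```

Because each row of `x` is processed independently, running the FFN on a single token yields exactly the corresponding row of the full output, which is the per-position independence described above.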

SwiGLU and Gated Variants

Modern LLMs (LLaMA, Mistral, etc.) use SwiGLU instead of the standard FFN: SwiGLU(x) = W2 · (SiLU(W1 · x) ⊗ (W3 · x)). This adds a third weight matrix (W3) and a gating mechanism that lets the network control what information passes through. Despite the extra parameters, it performs better at equivalent compute, so the intermediate dimension is scaled down (to roughly 2/3 of the usual 4x) to keep the parameter count comparable. This is a case where a slightly more complex component improves the whole system.

Knowledge Storage

Research suggests that FFN layers function like key-value memories: the first linear layer (W1) detects patterns in the input (keys), and the second linear layer (W2) maps those patterns to output updates (values). "The Eiffel Tower is in" activates specific neurons in W1, which through W2 promote the token "Paris." This key-value interpretation explains why FFN layers store factual knowledge and why knowledge editing techniques can modify specific facts by updating specific FFN weights.
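The key-value reading can be made concrete with a contrived toy FFN (hand-set weights, ReLU activation, purely illustrative): one row of W1 acts as a "key" that fires on a particular input direction, and the matching row of W2 is the "value" it writes to the output.

```python
import numpy as np

d_model, d_ff = 4, 3
# Each row of W1 is a key pattern; the matching row of W2 is its value.
W1 = np.zeros((d_ff, d_model))
W1[0] = [1.0, 0.0, 0.0, 0.0]         # key 0: fires on inputs aligned with e0
W2 = np.zeros((d_ff, d_model))
W2[0] = [0.0, 0.0, 0.0, 5.0]         # value 0: pushes the output toward e3

def relu(z):
    return np.maximum(z, 0.0)

x = np.array([1.0, 0.0, 0.0, 0.0])   # input that matches key 0
activations = relu(W1 @ x)           # which "memories" fire
out = activations @ W2               # weighted sum of the fired "values"
print(activations)                   # [1. 0. 0.]
print(out)                           # [0. 0. 0. 5.]
```

Editing the fact stored here amounts to changing row 0 of W2, which mirrors how knowledge-editing techniques update specific FFN weights without touching the rest of the network.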
