Basics

Feedforward Network

FFN, MLP Block
The component in each Transformer layer that processes every token independently through two linear transformations with an activation function in between. Attention mixes information across tokens (deciding which tokens are relevant to each other), while the feedforward network processes each token's representation on its own, applying a non-linear transformation that draws on encoded knowledge and performs computation.

Why It Matters

The feedforward network is where most of a Transformer's knowledge is stored. Attention gets all the glory, but the FFN layers hold the majority of the model's parameters (typically around two-thirds of the total) and are where factual associations, linguistic patterns, and learned computations primarily live. Understanding this helps explain phenomena such as knowledge editing and model pruning.
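
A quick back-of-the-envelope check of that two-thirds figure, assuming a standard 4x FFN and counting only a single layer's large weight matrices (embeddings, biases, and normalization are ignored):

```python
d = 4096                        # model dimension (illustrative)
attn_params = 4 * d * d         # W_Q, W_K, W_V, W_O
ffn_params  = 2 * d * (4 * d)   # W1 (d -> 4d) and W2 (4d -> d)
print(ffn_params / (attn_params + ffn_params))  # 0.666... -> about 2/3 of the layer's parameters
```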

Deep Dive

The standard FFN: FFN(x) = W2 · activation(W1 · x + b1) + b2, where W1 projects from the model dimension to a larger intermediate dimension (typically 4x), the activation function introduces non-linearity, and W2 projects back to the model dimension. Each position (token) passes through this independently — the FFN doesn't see other tokens, only the attention layer does.
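
A minimal sketch of this block in PyTorch; the dimensions and the GELU activation are illustrative choices rather than taken from any particular model:

```python
import torch
import torch.nn as nn

class FeedForward(nn.Module):
    """Standard Transformer FFN: project up, apply a non-linearity, project back."""
    def __init__(self, d_model: int = 512, d_ff: int = 2048):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_ff)   # W1: model dim -> intermediate dim (4x here)
        self.w2 = nn.Linear(d_ff, d_model)   # W2: intermediate dim -> model dim
        self.act = nn.GELU()                 # non-linearity between the two projections

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, seq_len, d_model); every position is transformed
        # with the same weights and without looking at any other position.
        return self.w2(self.act(self.w1(x)))

x = torch.randn(2, 10, 512)
print(FeedForward()(x).shape)  # torch.Size([2, 10, 512])
```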

SwiGLU and Gated Variants

Modern LLMs (LLaMA, Mistral, etc.) use SwiGLU instead of the standard FFN: FFN_SwiGLU(x) = W2 · (SiLU(W1 · x) ⊗ (W3 · x)). This adds a third weight matrix (W3) and a gating mechanism that lets the network control what information passes through. Despite the extra matrix, it performs better at equivalent compute, so the intermediate dimension is scaled down (e.g., to roughly 2/3 of the usual 4x) to keep the parameter count comparable. This is a case where a slightly more complex component improves the whole system.
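
A sketch of a SwiGLU FFN in PyTorch, following the formula above; the bias-free projections and the reduced intermediate size are typical choices assumed here, not requirements:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFFN(nn.Module):
    """Gated FFN: SiLU(W1·x) gates W3·x element-wise, then W2 projects back down."""
    def __init__(self, d_model: int = 512, d_ff: int = 1536):
        super().__init__()
        # d_ff is reduced from the usual 4x (2048) to offset the third matrix
        self.w1 = nn.Linear(d_model, d_ff, bias=False)  # gate projection
        self.w3 = nn.Linear(d_model, d_ff, bias=False)  # up projection
        self.w2 = nn.Linear(d_ff, d_model, bias=False)  # down projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w2(F.silu(self.w1(x)) * self.w3(x))

x = torch.randn(2, 10, 512)
print(SwiGLUFFN()(x).shape)  # torch.Size([2, 10, 512])
```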

Knowledge Storage

Research suggests that FFN layers function like key-value memories: the first linear layer (W1) detects patterns in the input (keys), and the second linear layer (W2) maps those patterns to output updates (values). "The Eiffel Tower is in" activates specific neurons in W1, which through W2 promote the token "Paris." This key-value interpretation explains why FFN layers store factual knowledge and why knowledge editing techniques can modify specific facts by updating specific FFN weights.
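
A toy decomposition that makes the key-value reading concrete, assuming a simple bias-free ReLU FFN: the output is exactly a sum of W2's columns (values) weighted by how strongly each row of W1 (a key) fires on the input:

```python
import torch

# Toy dimensions, chosen only to make the identity easy to check.
d_model, d_ff = 16, 64
W1 = torch.randn(d_ff, d_model)   # each row of W1 is a "key" that detects an input pattern
W2 = torch.randn(d_model, d_ff)   # each column of W2 is a "value" added to the output
x = torch.randn(d_model)          # a single token's representation

scores = torch.relu(W1 @ x)       # how strongly each key matches this input
as_matmul = W2 @ scores           # the usual FFN computation (biases omitted)
as_memory = sum(scores[i] * W2[:, i] for i in range(d_ff))  # same result, read as a memory lookup

print(torch.allclose(as_matmul, as_memory, atol=1e-4))  # True
```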
