
Self-Attention

Scaled Dot-Product Attention
An attention mechanism in which a sequence attends to itself: each token computes its relevance to every other token in the same sequence. The queries, keys, and values all come from the same input. This lets each token gather information from all other tokens, weighted by relevance. Self-attention is the core operation of every Transformer layer.

Why It Matters

Self-attention is what makes Transformers work. It replaces the sequential processing of RNNs with parallel, direct connections between all positions. The "bank" in "river bank" can attend to "river" to resolve its meaning, no matter how far apart the two words are. This ability to directly connect any two positions is why Transformers handle long-range dependencies so well.

Deep Dive

The computation: for input X, compute Q = X·W_Q, K = X·W_K, V = X·W_V. Then: Attention(Q,K,V) = softmax(Q·K^T / √d_k) · V. The softmax over the scaled scores Q·K^T / √d_k produces an N×N attention matrix whose entry (i,j) is how much token i attends to token j. The √d_k scaling keeps the dot products from growing too large in high dimensions, which would push the softmax into saturated regions with near-zero gradients.
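To make the formula concrete, here is a minimal NumPy sketch of single-head self-attention. The shapes, weight names (W_Q, W_K, W_V), and random inputs are illustrative assumptions, not any particular model's parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_Q, W_K, W_V):
    Q = X @ W_Q                      # (N, d_k) queries
    K = X @ W_K                      # (N, d_k) keys
    V = X @ W_V                      # (N, d_v) values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (N, N) scaled dot products
    weights = softmax(scores)        # row i: how much token i attends to each token j
    return weights @ V               # (N, d_v) relevance-weighted sum of values

# Illustrative sizes and random weights
N, d_model, d_k = 5, 16, 8
rng = np.random.default_rng(0)
X = rng.normal(size=(N, d_model))
W_Q, W_K, W_V = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(self_attention(X, W_Q, W_K, W_V).shape)  # (5, 8)
```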

Causal vs. Bidirectional

In decoder-only LLMs (GPT, Claude, Llama), self-attention is causal: each token can attend only to itself and earlier tokens. This is enforced by a causal mask that sets future positions to −∞ before the softmax. In encoder models (BERT), self-attention is bidirectional: every token attends to every other token. The causal constraint is what makes autoregressive generation possible: the model can't "peek" at future tokens.
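A sketch of how the causal mask is applied, reusing the softmax helper from the sketch above; the upper-triangular masking is the standard trick, while the surrounding setup is assumed for illustration.

```python
import numpy as np

def causal_self_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # (N, N) raw scores
    N = scores.shape[0]
    future = np.triu(np.ones((N, N), dtype=bool), k=1)  # True where j > i (future positions)
    scores = np.where(future, -np.inf, scores)          # −∞ before softmax blocks future tokens
    weights = softmax(scores)                           # each row sums to 1 over allowed positions
    return weights @ V                                  # token i only sees tokens 0..i
```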

The Quadratic Cost

Self-attention computes an N×N attention matrix, making it O(N²) in both time and memory. For a 128K token context, that's ~16 billion entries per layer per head. This quadratic scaling is the fundamental limitation that drives research into sparse attention, linear attention, Flash Attention (which reduces memory but not compute), and SSMs (which avoid the N×N matrix entirely). Every approach to long-context modeling is ultimately about managing this quadratic cost.
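A back-of-the-envelope check of the numbers above; the fp16 storage assumption is illustrative, not tied to any specific model.

```python
n = 128_000
entries = n * n                      # attention matrix entries per layer per head
print(f"{entries:,}")                # 16,384,000,000 -> ~16 billion
print(f"{entries * 2 / 1e9:.1f} GB per layer per head in fp16")  # ~32.8 GB
```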
