
Sparse Attention

Local Attention, Sliding Window Attention
Attention mechanisms that process only a subset of token pairs instead of the full N×N attention matrix. Sliding window attention attends only to nearby tokens (within a fixed window). Sparse patterns (such as Longformer's local + global combination) let designated tokens attend to everything while most tokens attend only locally. These approaches reduce attention's quadratic cost on long sequences.
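To make the cost reduction concrete, here is a back-of-the-envelope count of attended (query, key) pairs for causal dense attention versus a 4096-token sliding window. This is a sketch with illustrative sequence lengths, not a benchmark of any particular model.

```python
# Rough count of attended (query, key) pairs, assuming causal attention
# and a hypothetical 4096-token sliding window.
WINDOW = 4096

for n in (4_096, 32_768, 131_072):
    dense = n * (n + 1) // 2                            # full causal: ~n^2 / 2 pairs
    sparse = sum(min(i + 1, WINDOW) for i in range(n))  # each row capped at WINDOW keys
    print(f"n={n:>7,}: dense={dense:>14,}  sparse={sparse:>13,}  savings={dense / sparse:.1f}x")
```

At 4K tokens the two are identical; by 128K tokens the sliding window attends to roughly 16x fewer pairs, which is where the practical savings come from.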

Why It Matters

Sparse attention is how efficient models such as Mistral and Mixtral handle long sequences without paying the full cost of dense attention. It is a practical middle ground between "attend to everything" (expensive but thorough) and "attend to nothing far away" (cheap but limited). Understanding sparse attention helps you evaluate claims about context length and predict where quality degradation is likely to occur.

Deep Dive

Sliding window attention: each token attends only to tokens within a fixed window (e.g., 4096 tokens). Information from earlier tokens still propagates through the layers: layer 1 sees 4096 tokens, layer 2 effectively sees 8192 (two windows' worth), and by the final layer, information from the full sequence has had a chance to propagate. Mistral-7B uses a 4096-token sliding window across its 32 layers.
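As an illustration, below is a minimal sketch of a causal sliding-window attention mask in NumPy. The window size of 4 and sequence length of 8 are chosen only for readability (Mistral-7B's window is 4096), and the function name is hypothetical rather than any library's API.

```python
# Minimal sketch of a causal sliding-window attention mask (not Mistral's
# actual implementation). True means "query position i may attend to key position j".
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    i = np.arange(seq_len)[:, None]   # query positions
    j = np.arange(seq_len)[None, :]   # key positions
    causal = j <= i                   # never attend to future tokens
    local = (i - j) < window          # only the most recent `window` tokens
    return causal & local

print(sliding_window_mask(seq_len=8, window=4).astype(int))
# Each row has at most 4 ones, so per-layer cost grows linearly with length;
# stacking layers widens the effective receptive field, as described above.
```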

Hybrid Patterns

Longformer combines sliding window (local) attention with global attention on selected tokens (like [CLS] or user-defined positions). BigBird adds random attention connections on top of local and global patterns. These hybrid approaches let models handle 4K–16K tokens with subquadratic cost while maintaining the ability to connect distant tokens through global positions.
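The local-plus-global idea can be sketched with a single global position standing in for something like [CLS]. The function and parameter names below are illustrative assumptions, not Longformer's actual API.

```python
# Sketch of a Longformer-style hybrid mask: a symmetric local window plus
# full attention to and from a few designated global positions.
import numpy as np

def local_global_mask(seq_len: int, window: int, global_positions) -> np.ndarray:
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    mask = np.abs(i - j) < window     # local sliding window
    for g in global_positions:
        mask[g, :] = True             # global token attends to everything
        mask[:, g] = True             # every token attends to the global token
    return mask

mask = local_global_mask(seq_len=10, window=2, global_positions=[0])
# Row 0 and column 0 are dense; every other row stays sparse, so total cost
# remains roughly linear while distant tokens can still connect via position 0.
```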

The Quality Trade-off

Sparse attention works well for many tasks but can degrade on tasks requiring precise long-range dependencies — referencing a specific fact from the beginning of a long document, maintaining consistency across a long conversation, or following complex instructions that span many tokens. Dense attention (full quadratic) with Flash Attention remains more robust for these cases, which is why most frontier models still use dense attention and rely on Flash Attention for efficiency rather than sparsity.
