Infrastructure

Flash Attention

FlashAttention, FlashAttention-2
A GPU-optimized implementation of the attention mechanism that runs 2–4x faster than standard attention and uses significantly less memory. Flash Attention achieves this not by changing what attention computes, but by reorganizing how the computation runs on GPU hardware: it minimizes slow memory transfers between GPU HBM and on-chip SRAM.
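In practice you rarely call a Flash Attention kernel directly; frameworks dispatch to one for you. A minimal sketch, assuming PyTorch 2.x on a CUDA GPU, where scaled_dot_product_attention can select a FlashAttention backend when the inputs (half precision, supported head dims) allow it:

```python
import torch
import torch.nn.functional as F

# Q/K/V in (batch, heads, seq_len, head_dim) layout, fp16 on GPU: the
# conditions under which PyTorch can pick its FlashAttention backend.
q = torch.randn(1, 8, 4096, 64, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# The N x N score matrix is never materialized in HBM when the flash
# kernel is selected; the result matches standard attention.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```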

Why It Matters

Flash Attention is arguably the most impactful systems optimization in modern AI. It made long-context models practical, cutting attention's memory usage from quadratic to (effectively) near-linear, and directly enabled the jump from 4K to 128K+ context windows. Every major LLM uses it. Without Flash Attention, today's long-context models would be prohibitively expensive.

Deep Dive

The key insight (Dao et al., 2022): standard attention materializes the full N×N attention matrix in GPU HBM (high bandwidth memory), which is both memory-intensive (quadratic in sequence length) and slow (HBM bandwidth is the bottleneck). Flash Attention never materializes this matrix. Instead, it computes attention in tiles, loading small blocks of Q, K, V into fast on-chip SRAM, computing partial results, and accumulating them. This combines two techniques: tiling (blocking the computation so each piece fits in SRAM) and kernel fusion (performing all the steps in a single GPU kernel, avoiding round trips to HBM).
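To make the tiling concrete, here is a NumPy sketch of the forward pass (shapes and block size are illustrative; the real kernel also tiles over Q across thread blocks and runs each tile in SRAM inside one fused CUDA kernel):

```python
import numpy as np

def flash_attention_forward(Q, K, V, block_size=64):
    """Tiled attention with an online softmax, streaming over K/V blocks."""
    N, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros_like(Q)            # running (unnormalized) output
    m = np.full(N, -np.inf)         # running row-wise max of the scores
    l = np.zeros(N)                 # running softmax denominator
    for j in range(0, N, block_size):
        Kj, Vj = K[j:j + block_size], V[j:j + block_size]
        S = (Q @ Kj.T) * scale                 # scores for this tile only
        m_new = np.maximum(m, S.max(axis=1))   # updated row-wise max
        alpha = np.exp(m - m_new)              # rescales earlier partials
        P = np.exp(S - m_new[:, None])         # this tile's numerator
        l = l * alpha + P.sum(axis=1)
        O = O * alpha[:, None] + P @ Vj
        m = m_new
    return O / l[:, None]

# Agrees with standard attention up to floating-point error:
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((256, 64)) for _ in range(3))
S = (Q @ K.T) / np.sqrt(64)
P = np.exp(S - S.max(axis=1, keepdims=True))
assert np.allclose(flash_attention_forward(Q, K, V),
                   (P / P.sum(axis=1, keepdims=True)) @ V)
```

The rescaling by alpha is the online-softmax trick: each new tile updates a running numerator and denominator, so the full N×N score matrix never needs to exist at once.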

The Memory Savings

Standard attention stores the N×N attention matrix, requiring O(N²) memory. For a 128K context, the score matrix for a single attention head is already 32 GiB in fp16; across 128 heads that is about 4 TiB. Flash Attention uses O(N) memory by computing softmax incrementally and never storing the full matrix. This is what made 128K–1M context windows feasible on existing hardware. FlashAttention-2 further improved throughput by better parallelizing work across GPU thread blocks.
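The arithmetic, as a quick sanity check (batch size 1 and fp16 scores are assumptions for illustration):

```python
# Memory needed to materialize the attention score matrix.
seq_len = 128 * 1024                  # 128K-token context
num_heads = 128
score_bytes = seq_len ** 2 * 2        # one N x N matrix per head, fp16

print(f"per head:  {score_bytes / 2**30:.0f} GiB")              # 32 GiB
print(f"all heads: {score_bytes * num_heads / 2**40:.0f} TiB")  # 4 TiB
```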

IO-Aware Algorithm Design

Flash Attention exemplifies a broader principle: on modern hardware, the bottleneck is often memory bandwidth, not compute. GPUs can perform trillions of operations per second but can only read/write hundreds of gigabytes per second from HBM. Algorithms that minimize memory traffic (even at the cost of extra computation) often win. This "IO-aware" approach is influencing how the entire field thinks about algorithm design for AI workloads.
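A rough roofline estimate makes the point. The specs below are A100-like ballpark figures assumed for illustration, not measurements:

```python
# Compare a compute-bound lower bound with the cost of moving the score
# matrix through HBM, for one attention head.
flops_per_s = 312e12        # assumed fp16 tensor-core peak
hbm_bytes_per_s = 2.0e12    # assumed HBM bandwidth

N, d = 8192, 128                 # illustrative sequence length, head dim
matmul_flops = 4 * N * N * d     # QK^T plus PV, ~2*N^2*d FLOPs each
score_bytes = 4 * N * N * 2      # ~writing and re-reading scores/probs, fp16

print(f"compute time ~{matmul_flops / flops_per_s * 1e3:.2f} ms")    # ~0.11 ms
print(f"HBM traffic  ~{score_bytes / hbm_bytes_per_s * 1e3:.2f} ms") # ~0.27 ms
```

Even this crude estimate shows the HBM round-trips for the score matrix costing more time than all of the attention math, which is why avoiding that traffic (at the price of a few extra exponentials and rescalings) wins.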
