
Flash Attention

FlashAttention, FlashAttention-2
A GPU-optimized implementation of the attention mechanism that is 2–4x faster than standard attention and uses significantly less memory. Flash Attention achieves this not by changing what attention computes, but by restructuring how the computation is performed on GPU hardware, minimizing slow memory transfers between GPU HBM and on-chip SRAM.
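
To make the "what versus how" distinction concrete, here is a minimal NumPy sketch of the computation that Flash Attention reproduces exactly; the function name and shapes are illustrative, not taken from any particular library.

```python
import numpy as np

def standard_attention(Q, K, V):
    """Q, K, V: (N, d) arrays; returns the (N, d) attention output.

    Materializes the full N x N score matrix, which is exactly
    what Flash Attention avoids.
    """
    d = Q.shape[-1]
    S = (Q @ K.T) / np.sqrt(d)               # (N, N) scores -- the O(N^2) object
    S = S - S.max(axis=-1, keepdims=True)    # shift for numerical stability
    P = np.exp(S)
    P = P / P.sum(axis=-1, keepdims=True)    # row-wise softmax
    return P @ V
```

Note the (N, N) intermediate: everything below is about producing the same result without ever holding that array.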

Why It Matters

Flash Attention is arguably the most impactful systems optimization in modern AI. By reducing attention's memory usage from quadratic to near-linear in practice, it made long-context models practical, directly enabling the jump from 4K to 128K+ context windows. Every major LLM uses it. Without Flash Attention, today's long-context models would be prohibitively expensive.

Deep Dive

The key insight (Dao et al., 2022): standard attention materializes the full N×N attention matrix in GPU HBM (high-bandwidth memory), which is both memory-intensive (quadratic in sequence length) and slow (HBM bandwidth is the bottleneck). Flash Attention never materializes this matrix. Instead, it computes attention in tiles, loading small blocks of Q, K, V into fast on-chip SRAM, computing partial results, and accumulating them, combining two techniques: tiling and kernel fusion.
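
Here is a rough Python sketch of that idea using the online-softmax accumulation. For brevity it tiles only over K/V blocks, while the real kernel also tiles Q, runs fused in on-chip SRAM, and handles the backward pass; all names are illustrative.

```python
import numpy as np

def tiled_attention(Q, K, V, block_size=64):
    """Numerically equivalent to standard attention, but never
    forms the full N x N score matrix at once."""
    N, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    out = np.zeros((N, d))
    m = np.full((N, 1), -np.inf)   # running row-wise max
    l = np.zeros((N, 1))           # running softmax normalizer
    for start in range(0, N, block_size):
        Kb = K[start:start + block_size]   # load one K/V tile
        Vb = V[start:start + block_size]   # (in the real kernel: into SRAM)
        S = (Q @ Kb.T) * scale             # partial scores, (N, block_size)
        m_new = np.maximum(m, S.max(axis=-1, keepdims=True))
        P = np.exp(S - m_new)
        corr = np.exp(m - m_new)           # rescale earlier partial results
        l = l * corr + P.sum(axis=-1, keepdims=True)
        out = out * corr + P @ Vb
        m = m_new
    return out / l                         # normalize once at the end
```

On random inputs this matches standard_attention above to floating-point precision, despite never holding more than an (N, block_size) slice of the score matrix.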

The Memory Savings

Standard attention stores the N×N attention matrix, requiring O(N²) memory. For a 128K context, a single head's fp16 score matrix is about 34 GB, and across 128 attention heads that is over 4 TB. Flash Attention uses O(N) memory by computing the softmax incrementally and never storing the full matrix. This is what made 128K–1M context windows feasible on existing hardware. FlashAttention-2 further improved throughput by parallelizing work better across GPU thread blocks.
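
The arithmetic behind those numbers, sketched in Python (fp16 scores; the head dimension d = 128 is an illustrative assumption):

```python
N = 128 * 1024            # 128K-token context
heads = 128
bytes_fp16 = 2

score_matrix = N * N * bytes_fp16
print(score_matrix / 2**30)           # 32.0 GiB for one head's N x N matrix
print(score_matrix * heads / 2**40)   # 4.0 TiB across 128 heads

# Flash Attention's working state per head is O(N): the (N, d) output
# plus a per-row running max and normalizer (kept in fp32 here).
d = 128
linear_state = N * d * bytes_fp16 + 2 * N * 4
print(linear_state / 2**20)           # ~33 MiB per head
```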

IO-Aware Algorithm Design

Flash Attention exemplifies a broader principle: on modern hardware, the bottleneck is often memory bandwidth, not compute. A GPU can perform hundreds of trillions of operations per second but can move only a few terabytes per second to and from HBM. Algorithms that minimize memory traffic (even at the cost of extra computation) often win. This "IO-aware" approach is influencing how the entire field thinks about algorithm design for AI workloads.
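
A back-of-the-envelope version of that argument, using round numbers merely in the ballpark of a current datacenter GPU (both figures are assumptions, not any specific product's spec):

```python
compute_flops = 300e12   # ~300 TFLOP/s of fp16 matrix throughput (assumed)
hbm_bandwidth = 2e12     # ~2 TB/s of HBM bandwidth (assumed)

# FLOPs the chip can execute in the time it takes one byte to move
# to or from HBM:
print(compute_flops / hbm_bandwidth)   # 150.0 FLOPs per byte

# Any kernel doing fewer FLOPs per byte of HBM traffic is memory-bound,
# so recomputing values in SRAM can be cheaper than re-reading them.
```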

