Infrastructure

Flash Attention

FlashAttention, FlashAttention-2
An optimized GPU implementation of the attention mechanism that is 2–4x faster and uses significantly less memory than standard attention. Flash Attention achieves this not by changing what attention computes, but by restructuring how the computation is carried out on GPU hardware, minimizing slow memory transfers between GPU HBM and on-chip SRAM.

Why It Matters

Flash Attention is arguably the most impactful systems optimization in modern AI. It made long-context models practical by reducing attention's memory usage from quadratic to near-linear in practice, directly enabling the jump from 4K to 128K+ context windows. Every major LLM uses it. Without Flash Attention, today's long-context models would be prohibitively expensive.

Deep Dive

The key insight (Dao et al., 2022): standard attention materializes the full N×N attention matrix in GPU HBM (high-bandwidth memory), which is both memory-intensive (quadratic in sequence length) and slow (HBM bandwidth is the bottleneck). Flash Attention never materializes this matrix. Instead, it computes attention in tiles, loading small blocks of Q, K, and V into fast on-chip SRAM, computing partial results, and accumulating them. This combines two techniques: tiling (processing the computation block by block) and kernel fusion (performing the matmuls, softmax, and rescaling in a single GPU kernel so intermediates never leave SRAM).
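A minimal single-head sketch of the tiled computation in NumPy (the block size, function name, and shapes are illustrative choices; this mirrors the algorithm's structure, not the actual CUDA kernel):

import numpy as np

def flash_attention_tiled(Q, K, V, block_size=64):
    """Computes softmax(Q K^T / sqrt(d)) V one K/V block at a time,
    keeping a running max and running sum so the full N x N score
    matrix is never materialized."""
    N, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    O = np.zeros((N, d))                # output accumulator
    m = np.full(N, -np.inf)             # running row-wise max
    l = np.zeros(N)                     # running softmax denominator
    for j in range(0, N, block_size):   # stream over K/V blocks
        Kj, Vj = K[j:j + block_size], V[j:j + block_size]
        S = (Q @ Kj.T) * scale          # scores for this block only
        m_new = np.maximum(m, S.max(axis=1))
        # rescale prior accumulators to the new max, then fold in block
        alpha = np.exp(m - m_new)
        P = np.exp(S - m_new[:, None])
        l = l * alpha + P.sum(axis=1)
        O = O * alpha[:, None] + P @ Vj
        m = m_new
    return O / l[:, None]

# Sanity check against standard (materialized) attention
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((256, 32)) for _ in range(3))
S = (Q @ K.T) / np.sqrt(32)
P = np.exp(S - S.max(axis=1, keepdims=True))
ref = (P / P.sum(axis=1, keepdims=True)) @ V
assert np.allclose(flash_attention_tiled(Q, K, V), ref)

The trick is the running max m and denominator l: whenever a new block raises the max, previously accumulated results are rescaled by exp(m - m_new). This is the online softmax that lets the full score matrix stay un-materialized.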

The Memory Savings

Standard attention stores the N×N attention matrix, requiring O(N²) memory. For a 128K context, a single head's score matrix in fp16 is already 32 GiB, and across 128 heads that is terabytes. Flash Attention uses O(N) memory by computing the softmax incrementally and never storing the full matrix. This is what made 128K–1M context windows feasible on existing hardware. FlashAttention-2 further improved throughput by partitioning work better across GPU thread blocks and warps.
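A back-of-the-envelope check of those figures (assuming fp16 scores at 2 bytes each; the 128K context and 128 heads are the example shape above):

N, heads, bytes_per_el = 128 * 1024, 128, 2
per_head = N * N * bytes_per_el                          # one N x N score matrix
print(f"per head:  {per_head / 2**30:,.0f} GiB")          # 32 GiB
print(f"all heads: {per_head * heads / 2**40:,.1f} TiB")  # 4.0 TiB
# Flash Attention keeps only O(N) per-row statistics instead
# (running max plus running softmax denominator, per head):
stats = N * heads * bytes_per_el * 2
print(f"flash stats: {stats / 2**20:,.0f} MiB")           # 64 MiB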

IO-Aware Algorithm Design

Flash Attention exemplifies a broader principle: on modern hardware, the bottleneck is often memory bandwidth, not compute. A modern GPU can perform hundreds of trillions of operations per second but can only move a few terabytes per second to and from HBM. Algorithms that minimize memory traffic (even at the cost of extra computation) often win. This "IO-aware" approach is influencing how the entire field thinks about algorithm design for AI workloads.
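A rough roofline-style illustration of that imbalance, using ballpark A100-class figures (about 312 TFLOP/s for fp16 tensor cores and about 2 TB/s of HBM bandwidth; both numbers are assumptions for illustration, not official specs):

peak_flops = 312e12   # fp16 tensor-core throughput, FLOP/s (assumed)
hbm_bw = 2e12         # HBM bandwidth, bytes/s (assumed)
# An operation is compute-bound only if it performs more FLOPs per
# byte moved than the machine's balance point:
balance = peak_flops / hbm_bw
print(f"balance point: {balance:.0f} FLOPs per byte")   # ~156
# Reading and writing an N x N score matrix for the softmax costs only
# a few FLOPs per element moved, far below that balance point, so
# standard attention is memory-bound; fusing those steps on-chip
# removes the HBM round trips entirely.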
