Fundamentals

GQA

Grouped Query Attention
An attention variant in which multiple query heads share a single key-value head, shrinking the KV cache without significantly reducing quality. Instead of each query head having its own K and V projections (standard MHA), groups of query heads share K and V projections. Llama 2 70B, Mistral, Gemma, and most modern LLMs use GQA.

Why it matters

GQA is the practical answer to the KV cache memory problem. Standard multi-head attention with 64 heads needs 64 sets of K and V tensors per layer in the cache. GQA with 8 KV heads cuts this to 8 sets, an 8x memory reduction. That translates directly into serving more concurrent users or handling longer contexts on the same hardware.

Deep Dive

The spectrum: Multi-Head Attention (MHA) has equal numbers of Q, K, V heads — maximum quality, maximum memory. Multi-Query Attention (MQA) has many Q heads but only one K and one V head — minimum memory, some quality loss. GQA is the middle ground: divide Q heads into groups, each group sharing one K and one V head. A model with 32 Q heads and 8 KV groups has each KV head serving 4 Q heads.
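The grouping can be seen directly in tensor shapes: each KV head is repeated to line up with its group of query heads before the usual scaled dot-product attention. A minimal NumPy sketch of this (shapes and names are illustrative, not any library's API):

```python
import numpy as np

def gqa_attention(q, k, v):
    """Grouped-query attention: q has n_q_heads, k/v have n_kv_heads,
    each KV head serving n_q_heads // n_kv_heads query heads."""
    n_q_heads, seq_len, head_dim = q.shape
    n_kv_heads = k.shape[0]
    group_size = n_q_heads // n_kv_heads            # Q heads per KV head
    # Repeat each KV head so it aligns with its group of Q heads
    k = np.repeat(k, group_size, axis=0)            # (n_q_heads, seq, d)
    v = np.repeat(v, group_size, axis=0)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(head_dim)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v                              # (n_q_heads, seq, d)

# 32 Q heads sharing 8 KV heads (groups of 4), as in the example above
rng = np.random.default_rng(0)
q = rng.standard_normal((32, 16, 64))
k = rng.standard_normal((8, 16, 64))
v = rng.standard_normal((8, 16, 64))
out = gqa_attention(q, k, v)
print(out.shape)  # (32, 16, 64)
```

Note that only the small `(8, 16, 64)` K and V tensors would be cached; the repeat happens at compute time (real kernels avoid even that by indexing into the shared heads).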

Quality vs. Memory

Research shows that GQA with 8 KV heads matches MHA quality for most tasks while using 4–8x less KV cache memory. The quality preservation is somewhat surprising: it suggests that many attention heads are learning similar key-value patterns, so sharing them is efficient rather than limiting. Converting an existing MHA model to GQA through "uptraining" (a short fine-tuning phase) is also effective, avoiding the need to retrain from scratch.
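The uptraining conversion starts from a simple initialization: the GQA paper constructs each shared KV head by mean-pooling the projection matrices of the heads in its group, then fine-tunes briefly. A sketch of that pooling step (array layout is an assumption for illustration):

```python
import numpy as np

def mha_to_gqa_init(kv_proj, n_kv_heads):
    """Mean-pool per-head K (or V) projection weights of an MHA model
    into grouped heads, as initialization before uptraining."""
    n_heads, head_dim, d_model = kv_proj.shape
    group_size = n_heads // n_kv_heads
    # Average the projection matrices within each group of heads
    pooled = kv_proj.reshape(n_kv_heads, group_size, head_dim, d_model)
    return pooled.mean(axis=1)        # (n_kv_heads, head_dim, d_model)

# 32 MHA heads pooled down to 8 grouped KV heads
k_proj = np.random.default_rng(1).standard_normal((32, 128, 4096))
pooled = mha_to_gqa_init(k_proj, n_kv_heads=8)
print(pooled.shape)  # (8, 128, 4096)
```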

Impact on Inference

The KV cache memory savings from GQA directly translate to: longer context windows on the same GPU, more concurrent requests (higher throughput), and faster attention computation (fewer K and V tensors to read). For a 70B model at 128K context, the difference between MHA and GQA can be hundreds of gigabytes of KV cache — the difference between needing 8 GPUs and needing 4.
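The hundreds-of-gigabytes claim is easy to verify with back-of-envelope arithmetic. Assuming Llama-2-70B-like shapes (80 layers, 64 query heads, head dim 128) and fp16 cache entries:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # 2 tensors (K and V) per layer, each seq_len x n_kv_heads x head_dim
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

CTX = 128 * 1024                          # 128K-token context
mha = kv_cache_bytes(80, 64, 128, CTX)    # MHA: one KV head per Q head
gqa = kv_cache_bytes(80, 8, 128, CTX)     # GQA: 8 KV heads shared by 64 Q heads
GiB = 1024 ** 3
print(f"MHA: {mha / GiB:.0f} GiB, GQA: {gqa / GiB:.0f} GiB")  # 320 vs 40 GiB
```

At these assumed shapes the gap is roughly 280 GiB of KV cache for a single 128K-token sequence, which is exactly the regime where MHA forces you onto more GPUs purely for cache capacity.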

Related concepts
