Fundamentals

GQA

Grouped Query Attention
An attention variant in which multiple query heads share a single key-value head, shrinking the KV cache without significantly reducing quality. Instead of each query head having its own K and V projections (standard MHA), groups of query heads share K and V projections. Llama 2 70B, Mistral, Gemma, and most modern LLMs use GQA.
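The head counts from two published GQA configs make the grouping concrete (Llama 2 70B and Mistral 7B; the dictionary layout here is just illustrative):

```python
# Query/KV head counts per the published configs of two GQA models.
# "Group size" is the number of query heads that share one KV head.
configs = {
    "llama-2-70b": {"n_heads": 64, "n_kv_heads": 8},
    "mistral-7b":  {"n_heads": 32, "n_kv_heads": 8},
}

group_sizes = {
    name: c["n_heads"] // c["n_kv_heads"] for name, c in configs.items()
}
# llama-2-70b -> 8 query heads per KV head; mistral-7b -> 4
```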

Why it matters

GQA is the practical answer to the KV cache memory problem. Standard multi-head attention with 64 heads needs 64 sets of K and V tensors per layer in the cache. GQA with 8 KV heads cuts that to 8 sets, an 8x memory reduction. This translates directly into serving more concurrent users or handling longer contexts on the same hardware.
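The cache shrinks by exactly the head-count ratio. A back-of-the-envelope sketch, assuming an illustrative 32-layer model with head_dim 128, an 8K context, and an fp16 cache:

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # K and V each hold seq_len * head_dim elements per KV head per layer.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

mha = kv_cache_bytes(32, 64, 128, 8192)  # 64 KV heads: one per query head
gqa = kv_cache_bytes(32, 8, 128, 8192)   # 8 shared KV heads
# mha / gqa == 8: the cache shrinks by the head-count ratio
```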

Deep Dive

The spectrum: Multi-Head Attention (MHA) has equal numbers of Q, K, V heads — maximum quality, maximum memory. Multi-Query Attention (MQA) has many Q heads but only one K and one V head — minimum memory, some quality loss. GQA is the middle ground: divide Q heads into groups, each group sharing one K and one V head. A model with 32 Q heads and 8 KV groups has each KV head serving 4 Q heads.
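The grouping can be sketched in a few lines of numpy: each KV head is broadcast to the consecutive query heads in its group before the usual scaled dot-product attention (shapes and head counts are illustrative; no causal mask, for brevity):

```python
import numpy as np

def gqa_attention(q, k, v):
    """Grouped-query attention over already-projected heads (minimal sketch).

    q: (n_q_heads, seq, head_dim); k, v: (n_kv_heads, seq, head_dim),
    where n_q_heads is divisible by n_kv_heads.
    """
    group = q.shape[0] // k.shape[0]
    # Broadcast each KV head to the `group` query heads it serves.
    k = np.repeat(k, group, axis=0)           # (n_q_heads, seq, head_dim)
    v = np.repeat(v, group, axis=0)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)     # softmax over keys
    return w @ v                              # (n_q_heads, seq, head_dim)

rng = np.random.default_rng(0)
q = rng.normal(size=(32, 4, 16))  # 32 query heads, seq len 4, head_dim 16
k = rng.normal(size=(8, 4, 16))   # 8 KV heads -> each serves 4 query heads
v = rng.normal(size=(8, 4, 16))
out = gqa_attention(q, k, v)      # shape (32, 4, 16)
```

With 32 query heads and 8 KV heads this reproduces the 4-queries-per-KV-head grouping described above; MHA and MQA are the `group == 1` and `n_kv_heads == 1` special cases of the same code.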

Quality vs. Memory

Research shows that GQA with 8 KV heads matches MHA quality for most tasks while using 4–8x less KV cache memory. The quality preservation is somewhat surprising: it suggests that many attention heads are learning similar key-value patterns, so sharing them is efficient rather than limiting. Converting an existing MHA model to GQA through "uptraining" (a short fine-tuning phase) is also effective, avoiding the need to retrain from scratch.
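The uptraining recipe initializes each shared KV head by mean-pooling the per-head K (and V) projection weights within its group, then fine-tunes briefly. A minimal sketch of that initialization, with illustrative shapes:

```python
import numpy as np

def pool_kv_heads(w_kv, n_groups):
    """Mean-pool per-head K (or V) projection weights into shared heads.

    w_kv: (n_heads, head_dim, d_model) per-head projection weights,
    with n_heads divisible by n_groups.
    """
    n_heads = w_kv.shape[0]
    grouped = w_kv.reshape(n_groups, n_heads // n_groups, *w_kv.shape[1:])
    return grouped.mean(axis=1)   # (n_groups, head_dim, d_model)

# 32 MHA key heads pooled down to 8 shared GQA key heads.
w_k = np.random.default_rng(1).normal(size=(32, 64, 512))
w_k_gqa = pool_kv_heads(w_k, 8)   # shape (8, 64, 512)
```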

Impact on Inference

The KV cache memory savings from GQA directly translate to: longer context windows on the same GPU, more concurrent requests (higher throughput), and faster attention computation (fewer K and V tensors to read). For a 70B model at 128K context, the difference between MHA and GQA can be hundreds of gigabytes of KV cache — the difference between needing 8 GPUs and needing 4.
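To put rough numbers on that, a sizing sketch assuming 80 layers and head_dim 128 (approximately Llama 2 70B's published config) with an fp16 cache at 128K tokens:

```python
GIB = 1024**3

def kv_cache_gib(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # Factor of 2: one K tensor and one V tensor per KV head per layer.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem / GIB

mha_gib = kv_cache_gib(80, 64, 128, 128 * 1024)  # 64 KV heads (MHA-style)
gqa_gib = kv_cache_gib(80, 8, 128, 128 * 1024)   # 8 KV heads (GQA)
# mha_gib = 320 GiB, gqa_gib = 40 GiB: a ~280 GiB gap from KV heads alone
```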
