
GQA

Grouped Query Attention
An attention variant in which multiple query heads share a single key-value head, shrinking the KV cache with little loss in quality. Instead of every query head having its own K and V projections (standard MHA), groups of query heads share one K and one V projection. Llama 2 70B, Mistral, Gemma, and most modern LLMs use GQA.

Why it matters

GQA is the practical solution to the KV cache memory problem. Standard multi-head attention with 64 heads needs 64 sets of K and V tensors per layer in the cache. GQA with 8 KV heads reduces this to 8 sets — an 8x memory reduction. This directly translates to serving more concurrent users or handling longer contexts on the same hardware.
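The arithmetic behind the 8x figure is straightforward. A rough sizing sketch (layer count and head dimension here are illustrative assumptions, not values from the text):

```python
# Hypothetical KV cache sizing: bytes cached per token for MHA vs. GQA.
# n_layers and head_dim are assumed example values, not from any specific model.

def kv_cache_bytes_per_token(n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
    """K and V each store n_kv_heads * head_dim values per layer (fp16 = 2 bytes)."""
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem

# Standard MHA with 64 KV heads vs. GQA with 8, all else equal
mha = kv_cache_bytes_per_token(n_layers=80, n_kv_heads=64, head_dim=128)
gqa = kv_cache_bytes_per_token(n_layers=80, n_kv_heads=8, head_dim=128)
print(mha // gqa)  # → 8
```

Only the number of KV heads changes between the two calls, so the ratio is exactly 64 / 8 = 8, independent of layer count or head size.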

Deep Dive

The spectrum: Multi-Head Attention (MHA) has equal numbers of Q, K, V heads — maximum quality, maximum memory. Multi-Query Attention (MQA) has many Q heads but only one K and one V head — minimum memory, some quality loss. GQA is the middle ground: divide Q heads into groups, each group sharing one K and one V head. A model with 32 Q heads and 8 KV groups has each KV head serving 4 Q heads.
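The sharing scheme above can be sketched in a few lines of NumPy. This is a minimal single-layer illustration (no masking, batching, or output projection; all shapes are assumed example values): each KV head is simply broadcast to the group of query heads it serves.

```python
import numpy as np

# Minimal GQA sketch: 32 query heads, 8 KV heads, so each KV head serves
# a group of 4 query heads. Shapes and sizes are illustrative assumptions.
n_q_heads, n_kv_heads, seq, d = 32, 8, 4, 16
group = n_q_heads // n_kv_heads  # 4 query heads per KV head

rng = np.random.default_rng(0)
q = rng.standard_normal((n_q_heads, seq, d))
k = rng.standard_normal((n_kv_heads, seq, d))   # only 8 K tensors cached
v = rng.standard_normal((n_kv_heads, seq, d))   # only 8 V tensors cached

# Expand each KV head to serve its group of query heads
k_shared = np.repeat(k, group, axis=0)          # (32, seq, d)
v_shared = np.repeat(v, group, axis=0)

# Standard scaled dot-product attention from here on
scores = q @ k_shared.transpose(0, 2, 1) / np.sqrt(d)
weights = np.exp(scores - scores.max(-1, keepdims=True))
weights /= weights.sum(-1, keepdims=True)
out = weights @ v_shared                        # (32, seq, d)
```

Note that only `k` and `v` (8 heads each) would live in the cache; the `np.repeat` expansion happens at compute time, which is why the memory savings do not cost any quality mechanism beyond the sharing itself.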

Quality vs. Memory

Research shows that GQA with 8 KV heads matches MHA quality for most tasks while using 4–8x less KV cache memory. The quality preservation is somewhat surprising: it suggests that many attention heads are learning similar key-value patterns, so sharing them is efficient rather than limiting. Converting an existing MHA model to GQA through "uptraining" (a short fine-tuning phase) is also effective, avoiding the need to retrain from scratch.
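The uptraining conversion initializes the shared KV heads from the original MHA weights, in the GQA paper by mean-pooling the projection matrices of the heads in each group. A hedged sketch of that pooling step (shapes are assumed example values):

```python
import numpy as np

# Sketch of the MHA-to-GQA initialization step: mean-pool the per-head K
# projection matrices within each group to form one shared projection,
# then fine-tune ("uptrain"). All dimensions here are illustrative.
n_heads, n_kv_heads, d_model, head_dim = 32, 8, 64, 16
group = n_heads // n_kv_heads  # 4 heads pooled into each shared head

rng = np.random.default_rng(1)
w_k = rng.standard_normal((n_heads, d_model, head_dim))  # original per-head K projections

# Average each group of 4 head projections into one shared projection
w_k_gqa = w_k.reshape(n_kv_heads, group, d_model, head_dim).mean(axis=1)
print(w_k_gqa.shape)  # → (8, 64, 16)
```

The same pooling would be applied to the V projections; the Q projections are left untouched, since every query head is preserved.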

Impact on Inference

The KV cache memory savings from GQA directly translate to: longer context windows on the same GPU, more concurrent requests (higher throughput), and faster attention computation (fewer K and V tensors to read). For a 70B model at 128K context, the difference between MHA and GQA can be hundreds of gigabytes of KV cache — the difference between needing 8 GPUs and needing 4.
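A back-of-envelope check of that claim, using Llama 2 70B-like dimensions (80 layers, head dimension 128, fp16) as assumptions:

```python
# Assumed Llama 2 70B-like shapes: 80 layers, head_dim 128, fp16 (2 bytes).
def kv_cache_gib(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    """Total KV cache (GiB) for one sequence: K and V, every layer, every token."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem / 2**30

mha = kv_cache_gib(n_layers=80, n_kv_heads=64, head_dim=128, ctx_len=128 * 1024)
gqa = kv_cache_gib(n_layers=80, n_kv_heads=8, head_dim=128, ctx_len=128 * 1024)
print(mha, gqa)  # → 320.0 40.0
```

Under these assumptions a single 128K-token sequence costs about 320 GiB of cache with 64-head MHA but about 40 GiB with 8 KV heads, consistent with the hundreds-of-gigabytes gap described above.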
