Infrastructure

KV Cache

Key-Value Cache
A memory optimization that stores the key and value tensors the attention mechanism has already computed, so they don't have to be recomputed for each new token. During autoregressive generation, every new token attends to all of the preceding tokens; without a cache, attention over the entire sequence would be recomputed at every step. The KV cache trades memory for speed by keeping around what has already been computed.
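A minimal sketch of the idea in Python (toy single-head attention with NumPy; the random weights and tiny hidden size stand in for a real model): at each step only the newest token's key and value are computed and appended to the cache, while everything already cached is simply reused.

import numpy as np

d = 16                                      # toy hidden size
rng = np.random.default_rng(0)
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))

def attend(q, K, V):
    # scaled dot-product attention for a single query vector
    scores = K @ q / np.sqrt(d)             # (n,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V                       # (d,)

K_cache, V_cache = [], []                    # the KV cache
for step in range(5):
    x_t = rng.standard_normal(d)             # embedding of the newest token
    q_t = W_q @ x_t
    K_cache.append(W_k @ x_t)                # computed once, reused from now on
    V_cache.append(W_v @ x_t)
    out = attend(q_t, np.stack(K_cache), np.stack(V_cache))
    # without the cache, W_k @ x and W_v @ x would be recomputed here
    # for every previous token at every step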

Why It Matters

The KV cache is the reason LLM inference is memory-bound rather than compute-bound. A long conversation with Claude takes up memory beyond just the model weights: the KV cache for a 100K-token context can eat tens of gigabytes of VRAM. That is why providers charge more for longer contexts, why the practical ceiling on the "context window" sits below its theoretical limit, and why techniques like paged attention and cache eviction are active research directions.

Deep Dive

In a Transformer, the attention mechanism computes three matrices for each token: Query (Q), Key (K), and Value (V). The query of the current token is compared against the keys of all previous tokens to produce attention weights, which are then used to weight the values. During generation, the Q changes with each new token, but the K and V for all previous tokens stay the same. The KV cache stores these K and V matrices so they're computed once and reused.
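As a sketch of what a single cached decoding step looks like with multi-head shapes, the snippet below follows the common batch × heads × seq × head_dim tensor layout; the sizes and random tensors are purely illustrative, not taken from any particular model.

import torch
import torch.nn.functional as F

batch, n_heads, head_dim, past_len = 1, 8, 64, 100

# K and V accumulated over the previous 100 tokens
k_cache = torch.randn(batch, n_heads, past_len, head_dim)
v_cache = torch.randn(batch, n_heads, past_len, head_dim)

# projections for the single newest token (normally produced by the
# model's query/key/value projection layers)
q_new = torch.randn(batch, n_heads, 1, head_dim)
k_new = torch.randn(batch, n_heads, 1, head_dim)
v_new = torch.randn(batch, n_heads, 1, head_dim)

# the cache grows by one position per generated token
k_cache = torch.cat([k_cache, k_new], dim=2)
v_cache = torch.cat([v_cache, v_new], dim=2)

# the new query attends over all cached keys and values; no causal mask
# is needed because everything in the cache lies in the past
scores = q_new @ k_cache.transpose(-2, -1) / head_dim ** 0.5
out = F.softmax(scores, dim=-1) @ v_cache    # (batch, n_heads, 1, head_dim)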

The Memory Math

KV cache size = 2 (K and V) × num_layers × num_heads × head_dim × sequence_length × bytes_per_element. For a 70B model with 80 layers, 64 heads, head dimension 128, at FP16: that's 2 × 80 × 64 × 128 × 2 bytes = ~2.6 MB per token. A 100K context therefore needs ~260 GB of KV cache alone, more than the model weights themselves. This is the fundamental constraint on long-context inference.
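The same arithmetic as a quick Python check; the layer, head, and dimension counts are the illustrative 70B-style figures used above, not any specific model's exact configuration.

def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # the leading 2 accounts for storing both K and V
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_elem

print(kv_cache_bytes(80, 64, 128, seq_len=1) / 1e6)         # ~2.6 MB per token
print(kv_cache_bytes(80, 64, 128, seq_len=100_000) / 1e9)   # ~262 GB at 100K context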

Optimizations

Several techniques address KV cache pressure. Grouped Query Attention (GQA) shares key-value heads across multiple query heads, reducing cache size by 4–8x. Multi-Query Attention (MQA) goes further with a single KV head. PagedAttention (used by vLLM) manages cache memory like virtual memory pages, eliminating fragmentation. Sliding window attention limits how far back each token looks, capping cache growth. Quantizing the KV cache to FP8 or INT4 is another practical lever — some quality loss, but 2–4x memory savings.
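For a rough sense of scale, the same 100K-token calculation with only the number of KV heads changed, which is all that GQA and MQA alter (same illustrative 70B-style figures as above):

def kv_cache_gb(num_kv_heads, layers=80, head_dim=128, seq_len=100_000, bytes_per_elem=2):
    return 2 * layers * num_kv_heads * head_dim * seq_len * bytes_per_elem / 1e9

print(kv_cache_gb(64))   # MHA, 64 KV heads: ~262 GB
print(kv_cache_gb(8))    # GQA,  8 KV heads:  ~33 GB (8x smaller)
print(kv_cache_gb(1))    # MQA,  1 KV head:   ~4 GB (64x smaller)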
