Infrastructure

Speculative Decoding

Assisted Generation, Draft-and-Verify
A speed optimization in which a small, fast "draft" model generates several candidate tokens, and the large target model then verifies them in a single forward pass. When the draft model guesses right (which it often does for predictable tokens), multiple tokens are accepted at once, skipping the large model's slow token-by-token generation. When the draft is wrong, the large model takes over and corrects from that point.

Why It Matters

Speculative decoding can speed up LLM inference by 2–3x with no loss in output quality: the final output is mathematically identical to what the large model would have produced on its own. It is one of the rare free lunches in AI inference optimization, which is why vendors and frameworks have adopted it so widely.

Deep Dive

The key insight is that verifying a draft is much faster than generating from scratch. During normal autoregressive generation, each token requires a full serial forward pass through the model. But the model can process multiple tokens in parallel during a single forward pass (like it does with your prompt). So if you have a draft of 5 tokens, the large model can check all 5 in roughly the time it would take to generate 1. If 4 out of 5 are correct, you've generated 4 tokens for the cost of 1+1 (draft generation + verification).
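To make the verify step concrete, here is a minimal sketch of one greedy draft-and-verify cycle. The `draft_model` and `target_model` callables are hypothetical stand-ins for anything that maps a token sequence to next-token logits, not a real library API; production systems also use rejection sampling so that sampled outputs match the target model's distribution, whereas this sketch covers exact-match greedy decoding only.

```python
# Minimal sketch of one greedy draft-and-verify cycle (illustration only;
# the model callables are hypothetical stand-ins, not a real API).
import numpy as np

def speculative_step(target_model, draft_model, tokens, k=5):
    """One cycle: draft k tokens cheaply, verify them in one target pass.

    draft_model(tokens)  -> next-token logits, shape [vocab]
    target_model(tokens) -> logits at every position, shape [len(tokens), vocab]
    Returns the tokens accepted this cycle.
    """
    # 1. Draft: the small model proposes k tokens autoregressively (cheap).
    draft = list(tokens)
    for _ in range(k):
        draft.append(int(np.argmax(draft_model(draft))))
    proposed = draft[len(tokens):]

    # 2. Verify: ONE parallel forward pass of the big model over the whole
    #    drafted sequence yields its own prediction at every position.
    logits = target_model(draft)
    accepted = []
    for i, tok in enumerate(proposed):
        # Row len(tokens)+i-1 holds the target's choice for the token the
        # draft placed at position len(tokens)+i.
        target_tok = int(np.argmax(logits[len(tokens) + i - 1]))
        if target_tok == tok:
            accepted.append(tok)         # draft guessed right: keep going
        else:
            accepted.append(target_tok)  # mismatch: take the correction, stop
            break
    else:
        # Every draft token matched, so the same verification pass yields
        # one extra "bonus" token from the target model for free.
        accepted.append(int(np.argmax(logits[-1])))
    return accepted

# Toy demo: both "models" deterministically continue n -> n+1, so all
# k drafted tokens are accepted, plus the bonus token.
VOCAB = 10
def toy_next(tokens):
    out = np.zeros(VOCAB)
    out[(tokens[-1] + 1) % VOCAB] = 1.0
    return out
def toy_target(tokens):
    return np.stack([toy_next(tokens[: i + 1]) for i in range(len(tokens))])

print(speculative_step(toy_target, toy_next, [0], k=4))  # [1, 2, 3, 4, 5]
```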

Choosing the Draft Model

The draft model should be much smaller and faster than the target model, but similar enough to agree on most tokens. A common approach: use a model from the same family but smaller (Llama 70B verified by Llama 8B drafts). Some systems use the target model's own early layers as a draft model (self-speculative decoding). The acceptance rate — what fraction of draft tokens the target model agrees with — determines the speedup. Typical acceptance rates of 70–85% yield 2–3x throughput improvements.
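As a back-of-the-envelope check on those numbers (an illustration, not from the source): with per-token acceptance rate a and draft length k, the expected number of tokens produced per verify cycle is the geometric sum (1 - a^(k+1)) / (1 - a), counting the target model's free correction/bonus token. The `draft_cost` parameter below is an assumed relative cost of one draft pass versus one target pass.

```python
# Rough speedup model (illustrative sketch; draft_cost = 0.1 assumes a
# draft pass costs a tenth of a target pass).
def expected_speedup(a, k=5, draft_cost=0.1):
    tokens_per_cycle = (1 - a ** (k + 1)) / (1 - a)  # geometric series
    cycle_cost = 1 + k * draft_cost  # one target pass + k cheap draft passes
    return tokens_per_cycle / cycle_cost

for a in (0.70, 0.80, 0.85):
    print(f"acceptance {a:.0%}: ~{expected_speedup(a):.1f}x")
# acceptance 70%: ~2.0x, 80%: ~2.5x, 85%: ~2.8x,
# consistent with the 2-3x range quoted above.
```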

When It Helps Most

Speculative decoding helps most when the text is predictable (boilerplate, code with common patterns, structured output) and helps least when every token is surprising (creative writing, complex reasoning). It also helps more when the bottleneck is latency rather than throughput — if you're serving many concurrent requests, the GPU is already busy and the parallelism gains are smaller.
