Infrastructure

Speculative Decoding

Assisted Generation, Draft-and-Verify
A speed optimization in which a small, fast "draft" model generates several candidate tokens and the large target model then verifies all of them in a single forward pass. If the draft model guessed correctly (which it often does for predictable tokens), multiple tokens are accepted at once, skipping the large model's slow token-by-token generation. When the draft is wrong, the large model corrects from that point onward.

Why it matters

Speculative decoding can speed up LLM inference by 2–3x without any loss in output quality: the final output is mathematically identical to what the large model would have produced on its own. It is one of the few free lunches in AI inference optimization, which is why it is being widely adopted by providers and frameworks.
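
For example, Hugging Face transformers ships this as assisted generation: passing a small model as assistant_model to generate() enables speculative decoding transparently. A minimal sketch, with illustrative model names and assuming both checkpoints share a tokenizer:

```python
# Minimal sketch of assisted generation in Hugging Face transformers.
# Model names are illustrative; any small/large pair from the same family works.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-70B")
target = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-70B", device_map="auto")
draft = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", device_map="auto")

inputs = tokenizer("Write a Python function that reverses a string.",
                   return_tensors="pt").to(target.device)

# Passing the small model as `assistant_model` turns on speculative decoding;
# the output is the same as calling generate() on the target model alone.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```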

Deep Dive

The key insight is that verifying a draft is much faster than generating from scratch. During normal autoregressive generation, each token requires a full serial forward pass through the model. But the model can process multiple tokens in parallel within a single forward pass (just as it does with your prompt). So if you have a draft of 5 tokens, the large model can check all 5 in roughly the time it would take to generate 1. If 4 out of 5 are correct, you've generated 4 tokens for roughly the cost of one large-model forward pass plus the much cheaper draft generation.
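
To make that loop concrete, here is a minimal sketch of one draft-and-verify step for greedy decoding. It assumes both models are Hugging Face-style causal LMs that return .logits; production implementations reuse KV caches, and for sampling they use rejection sampling so the output distribution matches the target model exactly.

```python
import torch

def speculative_step(target, draft, tokens, k=5):
    """One draft-and-verify step (greedy variant, illustrative only).

    The draft model proposes k tokens one at a time (cheap), then the target
    model scores the whole extended sequence in a single forward pass and we
    keep the longest prefix of proposals it agrees with, plus one more token.
    """
    prompt_len = tokens.shape[1]

    # 1. Draft: generate k candidate tokens autoregressively with the small model.
    proposed = tokens
    for _ in range(k):
        logits = draft(proposed).logits                      # (1, seq_len, vocab)
        next_tok = logits[:, -1].argmax(dim=-1, keepdim=True)
        proposed = torch.cat([proposed, next_tok], dim=-1)

    # 2. Verify: one forward pass of the big model over prompt + all k proposals.
    #    target_choice[i] is the token the target itself would emit at that position.
    target_choice = target(proposed).logits[:, prompt_len - 1 :].argmax(dim=-1)  # (1, k+1)
    proposals = proposed[:, prompt_len:]                                         # (1, k)

    # 3. Accept the longest prefix where draft and target agree.
    n_accept = 0
    while n_accept < k and proposals[0, n_accept] == target_choice[0, n_accept]:
        n_accept += 1

    # Accepted prefix plus the target's own next token: a correction if the
    # draft diverged, a free bonus token if every proposal was accepted.
    return torch.cat([tokens, target_choice[:, : n_accept + 1]], dim=-1)
```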

Choosing the Draft Model

The draft model should be much smaller and faster than the target model, but similar enough to agree on most tokens. A common approach is to use a smaller model from the same family (for example, Llama 8B drafting for a Llama 70B target). Some systems use the target model's own early layers as the draft model (self-speculative decoding). The acceptance rate, the fraction of draft tokens the target model agrees with, determines the speedup. Typical acceptance rates of 70–85% yield 2–3x throughput improvements.
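
As a rough back-of-the-envelope estimate, if each draft token were accepted independently with probability alpha, a draft of length k would yield the expected token count below per large-model forward pass. Real acceptances are correlated, so treat this as intuition rather than a benchmark.

```python
def expected_tokens_per_target_pass(alpha: float, k: int) -> float:
    """Expected tokens emitted per large-model forward pass, assuming each of
    the k draft tokens is accepted independently with probability alpha.

    One token is always produced (a correction or bonus token), plus one more
    for every additional consecutive acceptance, which sums to the geometric
    series (1 - alpha**(k + 1)) / (1 - alpha).
    """
    if alpha >= 1.0:
        return float(k + 1)
    return (1.0 - alpha ** (k + 1)) / (1.0 - alpha)

# With a 5-token draft and 80% acceptance, each verification pass yields
# roughly 3.7 tokens instead of 1, before accounting for draft-model overhead.
print(expected_tokens_per_target_pass(0.8, 5))   # ~3.69
```

At alpha = 0.8 and k = 5 this gives about 3.7 tokens per verification pass, which lands in the 2–3x range once the draft model's own cost is subtracted.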

When It Helps Most

Speculative decoding helps most when the text is predictable (boilerplate, code with common patterns, structured output) and helps least when every token is surprising (creative writing, complex reasoning). It also helps more when the bottleneck is latency rather than throughput — if you're serving many concurrent requests, the GPU is already busy and the parallelism gains are smaller.
