
Speculative Decoding

Assisted Generation, Draft-and-Verify
A speed optimization in which a small, fast “draft” model generates several candidate tokens and the large target model then verifies them all in a single forward pass. When the draft model guesses correctly (which it often does for predictable tokens), multiple tokens are accepted at once, skipping the large model's slow token-by-token generation. When the draft is wrong, the large model corrects from that point onward.

Why It Matters

Speculative decoding can speed up LLM inference 2–3x with no loss in output quality: the final output is mathematically identical to what the large model would have produced on its own (token-for-token under greedy decoding, and identical in distribution under sampling, thanks to the acceptance rule used during verification). It is one of the few free lunches in AI inference optimization, which is why it has been widely adopted across providers and frameworks.

Deep Dive

The key insight is that verifying a draft is much faster than generating from scratch. During normal autoregressive generation, each token requires a full serial forward pass through the model. But the model can process multiple tokens in parallel during a single forward pass (like it does with your prompt). So if you have a draft of 5 tokens, the large model can check all 5 in roughly the time it would take to generate 1. If 4 out of 5 are correct, you've generated 4 tokens for the cost of 1+1 (draft generation + verification).
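
To make the mechanics concrete, here is a minimal sketch of one draft-and-verify step with greedy decoding. The `draft_model` and `target_model` callables are hypothetical stand-ins that return next-token logits for every input position; production implementations (e.g., vLLM or Hugging Face's assisted generation) additionally manage KV caches, batching, and sampling.

```python
import torch

def speculative_step(target_model, draft_model, tokens, k=5):
    """One draft-and-verify step (greedy decoding).

    Hypothetical interfaces: each model maps a 1-D tensor of token ids
    to a (seq_len, vocab) tensor of next-token logits.
    """
    # 1. Draft: the small model proposes k tokens autoregressively (cheap).
    draft = tokens
    for _ in range(k):
        next_tok = draft_model(draft)[-1].argmax()
        draft = torch.cat([draft, next_tok.unsqueeze(0)])

    # 2. Verify: ONE forward pass of the large model scores all k drafted
    #    positions in parallel, just like prompt processing.
    target_logits = target_model(draft)
    target_choices = target_logits[len(tokens) - 1 : -1].argmax(dim=-1)

    # 3. Accept the longest prefix on which draft and target agree.
    drafted = draft[len(tokens):]
    n_accept = 0
    for d, t in zip(drafted, target_choices):
        if d != t:
            break
        n_accept += 1

    # The target model always contributes one token of its own: its
    # correction at the first disagreement, or a free "bonus" token when
    # every draft was accepted. So each step yields n_accept + 1 tokens.
    if n_accept == k:
        bonus = target_logits[-1].argmax().unsqueeze(0)
    else:
        bonus = target_choices[n_accept].unsqueeze(0)
    return torch.cat([tokens, drafted[:n_accept], bonus])
```

Note that even a fully rejected draft still yields one token from the verification pass, so a step is never worse than plain autoregressive decoding apart from the small draft-model overhead.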

Choosing the Draft Model

The draft model should be much smaller and faster than the target model, but similar enough to agree on most tokens. A common approach: use a model from the same family but smaller (Llama 70B verified by Llama 8B drafts). Some systems use the target model's own early layers as a draft model (self-speculative decoding). The acceptance rate — what fraction of draft tokens the target model agrees with — determines the speedup. Typical acceptance rates of 70–85% yield 2–3x throughput improvements.
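
The link between acceptance rate and speedup can be made precise. Under the simplifying assumption that each drafted token is accepted independently with probability alpha (the analysis used in the original speculative decoding paper, Leviathan et al., 2023), the expected number of tokens produced per expensive target-model pass with draft length k is (1 - alpha^(k+1)) / (1 - alpha). A quick sketch:

```python
def expected_tokens_per_pass(alpha: float, k: int) -> float:
    """Expected tokens per target forward pass, assuming each of the k
    drafted tokens is accepted independently with probability alpha.
    Includes the one token the target model always contributes itself."""
    return (1 - alpha ** (k + 1)) / (1 - alpha)

# An 80% acceptance rate with 5-token drafts turns each target pass into
# ~3.7 tokens instead of 1; after subtracting the draft model's own cost,
# that lands in the commonly reported 2-3x range.
print(expected_tokens_per_pass(0.80, 5))  # ~3.69
print(expected_tokens_per_pass(0.70, 5))  # ~2.94
```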

When It Helps Most

Speculative decoding helps most when the text is predictable (boilerplate, code with common patterns, structured output) and helps least when every token is surprising (creative writing, complex reasoning). It also helps more when the bottleneck is latency rather than throughput — if you're serving many concurrent requests, the GPU is already busy and the parallelism gains are smaller.

