
Distillation

Knowledge Distillation, Model Distillation
Training a smaller "student" model to imitate the behavior of a larger "teacher" model. Instead of training the student on raw data with hard labels (cat/dog), you train it on the teacher's soft probability distributions (70% cat, 20% dog, 10% fox). The soft outputs carry more information than hard labels because they encode the teacher's uncertainty and the relationships between categories.

Why it matters

Distillation is how the industry makes powerful AI affordable. A 70-billion-parameter model may be too large and expensive for real-time applications, but a 7B model distilled from it can capture 90% of the capability at 10% of the cost. Many of the small, fast models people run locally are distilled from larger frontier models.

Deep Dive

The original insight from Hinton et al. (2015) was that a teacher's output probabilities contain "dark knowledge" — information about which wrong answers are almost right. A digit classifier that sees a "7" might output 0.8 for "7" but 0.15 for "1" and 0.03 for "9" — revealing that 7s look more like 1s than 9s. A student trained on these soft targets learns these relationships, which hard labels ("it's a 7, period") don't convey.
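
The sketch below shows a minimal PyTorch version of this idea: the student is trained on a weighted mix of the usual hard-label cross-entropy and a KL term between temperature-softened teacher and student distributions. The temperature, weighting, and toy tensors are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of Hinton-style knowledge distillation (illustrative values).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=4.0, alpha=0.5):
    # Standard cross-entropy against the ground-truth hard labels.
    hard_loss = F.cross_entropy(student_logits, hard_labels)

    # KL divergence between temperature-softened teacher and student
    # distributions: the "dark knowledge" about near-miss classes.
    # Scaling by T^2 keeps gradient magnitudes comparable across temperatures.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    return alpha * hard_loss + (1 - alpha) * soft_loss

# Toy usage: a batch of 2 examples over 3 classes (e.g. cat/dog/fox).
teacher_logits = torch.tensor([[4.0, 2.5, 1.0], [0.5, 3.5, 1.5]])
student_logits = torch.randn(2, 3, requires_grad=True)
labels = torch.tensor([0, 1])
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```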

In the LLM Era

For LLMs, distillation takes several forms. The most common is training a smaller model on outputs generated by a larger model — you run the teacher on a large set of prompts, collect its responses, and fine-tune the student on those (prompt, response) pairs. This is sometimes called "distillation through generation." It's controversial because some model licenses prohibit using outputs to train competing models, and because it can create models that sound confident but lack the teacher's deeper reasoning abilities.
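
A rough sketch of what this looks like in practice is below, using the Hugging Face transformers API: sample the teacher's responses to a prompt set and save (prompt, response) pairs for ordinary supervised fine-tuning of the student. The teacher checkpoint name, prompt set, and sampling settings are placeholders, not a specific recipe.

```python
# Sketch of "distillation through generation": collect teacher outputs
# that will later serve as the student's supervised fine-tuning targets.
import json
from transformers import AutoModelForCausalLM, AutoTokenizer

TEACHER_NAME = "big-teacher-model"  # hypothetical checkpoint name

tokenizer = AutoTokenizer.from_pretrained(TEACHER_NAME)
teacher = AutoModelForCausalLM.from_pretrained(TEACHER_NAME, device_map="auto")

prompts = [
    "Explain why the sky is blue.",
    "Summarize the plot of Hamlet in two sentences.",
]

pairs = []
for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt").to(teacher.device)
    # Sample a response from the teacher; this generation becomes a target.
    output_ids = teacher.generate(**inputs, max_new_tokens=256,
                                  do_sample=True, temperature=0.7)
    response = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                                skip_special_tokens=True)
    pairs.append({"prompt": prompt, "response": response})

# The resulting file feeds a standard next-token-prediction fine-tune
# of the smaller student model.
with open("distillation_pairs.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")
```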

Distillation vs. Quantization

People sometimes confuse distillation with quantization. Quantization shrinks a model by reducing numerical precision (32-bit to 4-bit) — same model, smaller numbers. Distillation creates an entirely new, architecturally smaller model — fewer layers, smaller dimensions — that has learned from the teacher. They're complementary: you can distill a 70B model into a 7B model and then quantize the 7B model to make it even smaller.
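
As a toy illustration of the difference, and of how the two compose, the sketch below shrinks a stand-in layer architecturally (distillation's effect) and then quantizes its weights to int8. The layer sizes and the naive symmetric quantization scheme are assumptions for illustration only.

```python
# Toy contrast between distillation (new, smaller architecture) and
# quantization (same architecture, lower-precision numbers).
import torch
import torch.nn as nn

teacher = nn.Linear(4096, 4096)   # stand-in for a large teacher layer
student = nn.Linear(1024, 1024)   # stand-in for the distilled, smaller layer

# Quantization applied to the distilled student: store weights as int8.
w = student.weight.data
scale = w.abs().max() / 127.0                    # symmetric per-tensor scale
w_int8 = torch.round(w / scale).to(torch.int8)   # 4x smaller than float32
w_dequant = w_int8.float() * scale               # values used at compute time

# The two compose: distill 70B -> 7B, then quantize the 7B student.
print(teacher.weight.numel(), student.weight.numel(), w_int8.element_size())
```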
