Training

Distillation

Knowledge Distillation, Model Distillation
Training a smaller "student" model to imitate the behavior of a larger "teacher" model. Instead of training the student on raw data and hard labels (cat/dog), you train it on the teacher's soft probability distributions (70% cat, 20% dog, 10% fox). Soft outputs carry more information than hard labels because they encode the teacher's uncertainty and the relationships between categories.
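
A minimal sketch of this soft-target objective in PyTorch (the function name, temperature, and alpha values are illustrative, not a reference implementation):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with the teacher's soft targets."""
    # Soft targets: the teacher's probability distribution, flattened
    # by a temperature > 1 so near-miss classes stay visible.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps the soft term's gradient magnitude stable
    # as the temperature changes (Hinton et al., 2015).
    soft_loss = F.kl_div(log_student, soft_targets,
                         reduction="batchmean") * temperature ** 2
    # Hard loss: ordinary cross-entropy against the true labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * hard_loss + (1 - alpha) * soft_loss
```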

Why It Matters

Distillation is how the industry makes powerful AI accessible. A 70-billion-parameter model may be too large and expensive for real-time applications, but a 7B model distilled from it can capture 90% of the capability at 10% of the cost. Many of the small, fast models people run locally are distillations of larger frontier models.

Deep Dive

The original insight from Hinton et al. (2015) was that a teacher's output probabilities contain "dark knowledge" — information about which wrong answers are almost right. A digit classifier that sees a "7" might output 0.8 for "7" but 0.15 for "1" and 0.03 for "9" — revealing that 7s look more like 1s than 9s. A student trained on these soft targets learns these relationships, which hard labels ("it's a 7, period") don't convey.
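
A quick way to see this effect is to soften the same logits at two temperatures. The logits below are made up for illustration; raising the temperature flattens the distribution so the near-miss classes ("1", "9") carry a visible share of probability:

```python
import torch
import torch.nn.functional as F

# Hypothetical logits for an image of a "7", over digit classes 0-9.
logits = torch.tensor([0.5, 2.5, 0.3, 0.1, 0.2, 0.1, 0.0, 5.0, 0.4, 1.2])

for T in (1.0, 4.0):
    probs = F.softmax(logits / T, dim=-1)
    # At T=1 the "7" dominates; at T=4 the "1" and "9" grow enough
    # to signal which wrong answers are almost right.
    print(f"T={T}: p(7)={probs[7]:.2f} p(1)={probs[1]:.2f} p(9)={probs[9]:.2f}")
```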

In the LLM Era

For LLMs, distillation takes several forms. The most common is training a smaller model on outputs generated by a larger model — you run the teacher on a large set of prompts, collect its responses, and fine-tune the student on those (prompt, response) pairs. This is sometimes called "distillation through generation." It's controversial because some model licenses prohibit using outputs to train competing models, and because it can create models that sound confident but lack the teacher's deeper reasoning abilities.
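
A sketch of the data-collection half of this pipeline, using the Hugging Face transformers API; the checkpoint name "teacher-70b" and the two prompts are placeholders for a real model and prompt corpus:

```python
import json
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; substitute any causal LM you are licensed to distill.
tok = AutoTokenizer.from_pretrained("teacher-70b")
teacher = AutoModelForCausalLM.from_pretrained("teacher-70b", device_map="auto")

prompts = ["Explain photosynthesis simply.", "Write a haiku about rain."]

with open("distill_pairs.jsonl", "w") as f:
    for prompt in prompts:
        inputs = tok(prompt, return_tensors="pt").to(teacher.device)
        out = teacher.generate(**inputs, max_new_tokens=256)
        # Keep only the newly generated tokens as the response.
        response = tok.decode(out[0][inputs["input_ids"].shape[1]:],
                              skip_special_tokens=True)
        f.write(json.dumps({"prompt": prompt, "response": response}) + "\n")
```

The student is then fine-tuned on the resulting (prompt, response) pairs with an ordinary supervised fine-tuning loop.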

Distillation vs. Quantization

People sometimes confuse distillation with quantization. Quantization shrinks a model by reducing numerical precision (32-bit to 4-bit) — same model, smaller numbers. Distillation creates an entirely new, architecturally smaller model — fewer layers, smaller dimensions — that has learned from the teacher. They're complementary: you can distill a 70B model into a 7B model and then quantize the 7B model to make it even smaller.
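
To make the contrast concrete, here is a minimal sketch of symmetric 4-bit round-to-nearest quantization (real quantizers use group-wise scales and pack two 4-bit values per byte, but the principle is the same):

```python
import torch

w = torch.randn(4, 4)  # the same weights, at full float32 precision

# Map weights onto the 15 integer levels in [-7, 7] with one scale.
scale = w.abs().max() / 7
w_int4 = torch.clamp((w / scale).round(), -7, 7).to(torch.int8)
w_dequant = w_int4.float() * scale  # what the runtime multiplies with

# Same architecture, same matrix shape; only the numbers got coarser.
print("max abs error:", (w - w_dequant).abs().max().item())
```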
