
Distillation

Knowledge Distillation, Model Distillation
Training a smaller "student" model to imitate the behavior of a larger "teacher" model. Instead of training the student on raw data with hard labels (cat/dog), you train it on the teacher's soft probability distributions (70% cat, 20% dog, 10% fox). Soft outputs carry more information than hard labels because they encode the teacher's uncertainty and the relationships between classes.

Why It Matters

Distillation is how the industry makes powerful AI deployable. A 70-billion-parameter model may be too large and expensive for real-time applications, but a 7B model distilled from it can capture roughly 90% of the capability at roughly 10% of the cost. Many of the small, fast models people run locally were distilled from larger frontier models.

Deep Dive

The original insight from Hinton et al. (2015) was that a teacher's output probabilities contain "dark knowledge" — information about which wrong answers are almost right. A digit classifier that sees a "7" might output 0.8 for "7" but 0.15 for "1" and 0.03 for "9" — revealing that 7s look more like 1s than 9s. A student trained on these soft targets learns these relationships, which hard labels ("it's a 7, period") don't convey.
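To make the soft-target idea concrete, here is a minimal sketch of a Hinton-style distillation loss in PyTorch. The temperature, the alpha weighting, and the tensor shapes are illustrative assumptions, not prescribed values.

```python
# A minimal sketch of the soft-target loss from Hinton et al. (2015), in PyTorch.
# Temperature, alpha, and shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=4.0, alpha=0.9):
    """Blend of a soft-target KL term and ordinary cross-entropy.

    student_logits, teacher_logits: (batch, num_classes) raw scores
    hard_labels: (batch,) ground-truth class indices
    """
    # Soften both distributions; a higher temperature exposes more of the
    # "dark knowledge" about which wrong classes are almost right.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)

    # KL divergence between the softened distributions, scaled by T^2 so its
    # gradient magnitude stays comparable to the hard-label term.
    kd_loss = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2

    # Standard cross-entropy against the hard labels keeps the student grounded.
    ce_loss = F.cross_entropy(student_logits, hard_labels)

    return alpha * kd_loss + (1 - alpha) * ce_loss


# Toy usage: a 10-class "digit" batch, matching the 7-vs-1-vs-9 example above.
if __name__ == "__main__":
    student_logits = torch.randn(8, 10, requires_grad=True)
    teacher_logits = torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()
    print(loss.item())
```

The T^2 factor and the alpha blend follow the original paper's recipe; in practice both are tuned per task.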

In the LLM Era

For LLMs, distillation takes several forms. The most common is training a smaller model on outputs generated by a larger model — you run the teacher on a large set of prompts, collect its responses, and fine-tune the student on those (prompt, response) pairs. This is sometimes called "distillation through generation." It's controversial because some model licenses prohibit using outputs to train competing models, and because it can create models that sound confident but lack the teacher's deeper reasoning abilities.
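A rough sketch of what "distillation through generation" looks like in code, assuming the Hugging Face transformers library; the model name, prompts, and generation settings are placeholders.

```python
# Sketch of "distillation through generation": collect teacher responses, then
# fine-tune a student on the resulting (prompt, response) pairs.
# "large-teacher-model" is a hypothetical checkpoint name.
from transformers import AutoModelForCausalLM, AutoTokenizer

teacher_name = "large-teacher-model"
tokenizer = AutoTokenizer.from_pretrained(teacher_name)
teacher = AutoModelForCausalLM.from_pretrained(teacher_name)

prompts = [
    "Explain photosynthesis in one sentence.",
    "Write a haiku about rain.",
]

# 1) Run the teacher over a (normally much larger) prompt set.
pairs = []
for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = teacher.generate(**inputs, max_new_tokens=128)
    # Keep only the newly generated tokens, not the echoed prompt.
    response = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                                skip_special_tokens=True)
    pairs.append({"prompt": prompt, "response": response})

# 2) Fine-tune the student on these pairs with the usual causal-LM objective
#    (e.g. via the Trainer API); only the data source differs from ordinary
#    supervised fine-tuning.
```

Step 2 is deliberately left as a comment: the fine-tuning loop itself is standard supervised training, which is exactly why this form of distillation is easy to do and hard to police.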

Distillation vs. Quantization

People sometimes confuse distillation with quantization. Quantization shrinks a model by reducing numerical precision (32-bit to 4-bit) — same model, smaller numbers. Distillation creates an entirely new, architecturally smaller model — fewer layers, smaller dimensions — that has learned from the teacher. They're complementary: you can distill a 70B model into a 7B model and then quantize the 7B model to make it even smaller.
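As a toy illustration of the distinction (not a production quantization scheme), the sketch below symmetrically quantizes a weight tensor to int8: the architecture and the values are essentially the same, only the number format shrinks.

```python
# Toy illustration: quantization keeps the same weights but stores them in
# fewer bits. Per-tensor symmetric int8 is an illustrative choice.
import torch

weights = torch.randn(7, 7)            # pretend these belong to the distilled model

scale = weights.abs().max() / 127      # one scale for the whole tensor
q_weights = torch.clamp((weights / scale).round(), -127, 127).to(torch.int8)

dequantized = q_weights.float() * scale  # what inference actually computes with

print("fp32 bytes:", weights.numel() * 4)
print("int8 bytes:", q_weights.numel() * 1)
print("max abs error:", (weights - dequantized).abs().max().item())
```

Distillation changes the left-hand side of this picture (how many weights exist at all); quantization only changes how each weight is stored.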
