
Instruction Following

Instruction Adherence
The model's ability to carry out user requests accurately: respecting format constraints, length requirements, style specifications, and behavioral instructions. "Write exactly 3 bullet points in French about X" tests instruction following: the response must be bullets (not paragraphs), exactly 3 (not 2 or 5), in French (not English), and about X (not Y).
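What makes constraints like "bullets" and "exactly 3" useful as tests is that they are mechanically verifiable. A minimal checker sketch (the function name and regex are illustrative, not any benchmark's actual code; language detection is omitted since it is not cheaply verifiable):

```python
import re

def check_response(response: str) -> dict:
    """Check a response against the verifiable constraints from the
    example: bullet format and exactly 3 items."""
    bullets = [line for line in response.splitlines()
               if re.match(r"^\s*[-*\u2022]\s+", line)]  # -, *, or • bullets
    return {
        "is_bulleted": len(bullets) > 0,
        "exactly_three": len(bullets) == 3,
    }

resp = "- Le chat\n- Le chien\n- L'oiseau"
print(check_response(resp))  # {'is_bulleted': True, 'exactly_three': True}
```

A prose answer would fail both checks, which is exactly how format violations get caught automatically.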

Why It Matters

Instruction following is the most practically important LLM capability. Users care less about how many facts a model "knows" than about whether it did what they actually asked. A model that writes beautiful prose but ignores your formatting requirements is less useful than one that reliably follows instructions. This is why IFEval and other instruction-following benchmarks have become central to model evaluation.

Deep Dive

Instruction following is trained through instruction tuning (SFT on instruction-response pairs) and refined through RLHF/DPO (learning to prefer responses that accurately follow instructions). The quality of instruction-following depends heavily on the diversity and precision of the training data: models that see many examples of "exactly 3 items" learn to count; models that only see vague instructions don't.
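The two training stages above consume differently shaped data. A sketch of what each might look like (field names are illustrative, not any specific library's schema):

```python
# SFT: supervised pairs mapping an instruction to a demonstrated response.
sft_example = {
    "instruction": "List exactly 3 primary colors, one per line.",
    "response": "red\nblue\nyellow",
}

# DPO/RLHF preference data: same prompt, one response that follows the
# constraint ("chosen") and one that violates it ("rejected").
dpo_example = {
    "prompt": "List exactly 3 primary colors, one per line.",
    "chosen": "red\nblue\nyellow",
    "rejected": "red\nblue\nyellow\ngreen",  # ignores the count constraint
}

# Training to prefer "chosen" over "rejected" teaches the model to respect
# "exactly 3" rather than merely produce plausible text.
assert len(dpo_example["chosen"].splitlines()) == 3
assert len(dpo_example["rejected"].splitlines()) == 4
```

Note that the contrast pair differs only in the violated constraint; that precision is what the paragraph above means by quality of training data.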

Where Models Fail

Common instruction-following failures: ignoring length constraints ("be brief" → still writes paragraphs), format drift (starting with the requested format but reverting to prose), constraint amnesia (following the first constraint but forgetting later ones in a complex instruction), and over-following (interpreting ambiguous instructions too literally or too broadly). These failures are more common in smaller models and become rarer with scale, but even frontier models occasionally miss constraints.
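Constraint amnesia in particular only surfaces if every constraint is evaluated independently rather than stopping at the first pass. A minimal evaluation-harness sketch (names and constraints are illustrative):

```python
def verify_all(response: str, constraints) -> list:
    """Evaluate every constraint against the response, returning
    (name, passed) pairs so later constraints are never skipped."""
    return [(name, check(response)) for name, check in constraints]

constraints = [
    ("brief (<= 50 words)",  lambda r: len(r.split()) <= 50),
    ("all lowercase",        lambda r: r == r.lower()),
    ("no exclamation marks", lambda r: "!" not in r),
]

# A response exhibiting constraint amnesia: brief, but the later
# constraints (lowercase, no exclamation marks) are forgotten.
resp = "sure, here it is! A SHORT ANSWER."
print(verify_all(resp, constraints))  # first passes, the later two fail
```

Scoring each constraint separately also distinguishes a model that misses one constraint from one that ignores the instruction entirely.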

System Prompts and Hierarchy

Instruction following becomes complex when instructions conflict: the system prompt says "always respond in JSON" but the user says "write me a poem." Most models implement an instruction hierarchy where system-level instructions take precedence over user messages, but the boundaries are fuzzy. Well-designed applications structure their instruction hierarchy clearly and test edge cases where different levels of instructions might conflict.
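The JSON-vs-poem conflict above can be resolved by an explicit precedence rule. A minimal sketch of one such policy (the ranks and resolution logic are an assumed design, not any vendor's actual implementation):

```python
# Lower rank number = higher authority; user requests cannot override
# system-level constraints under this (assumed) policy.
RANK = {"system": 0, "developer": 1, "user": 2}

def effective_format(messages):
    """Return the format set by the highest-ranked message that
    specifies one; lower-ranked messages cannot override it."""
    best = None
    for msg in messages:
        fmt = msg.get("format")
        if fmt is not None and (best is None or RANK[msg["role"]] < RANK[best["role"]]):
            best = msg
    return best["format"] if best else None

messages = [
    {"role": "system", "format": "json"},  # "always respond in JSON"
    {"role": "user",   "format": "poem"},  # "write me a poem"
]
print(effective_format(messages))  # 'json': the system constraint wins
```

Making the rule explicit like this is also what enables the edge-case testing the paragraph recommends: conflicting message pairs become unit-testable inputs.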

Related Concepts

Inference · Instruction Tuning