
Instruction Following

Instruction Adherence
A model's ability to accurately execute what the user asks for — respecting format constraints, length requirements, style specifications, and behavioral instructions. "Write exactly 3 bullet points in French about X" tests instruction following: the response must be bullets (not paragraphs), exactly 3 (not 2 or 5), in French (not English), and about X (not Y).
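Constraints like the one above are mechanically checkable, which is what makes them useful for evaluation. A minimal sketch of one such check, assuming a simple "exactly N bullet lines, nothing else" rule (the function name and regex are illustrative, not from any standard suite):

```python
import re

def check_bullet_constraint(response: str, expected_count: int = 3) -> bool:
    """Return True if the response is exactly `expected_count` bullet
    lines and contains no other non-blank content."""
    lines = [ln.strip() for ln in response.splitlines() if ln.strip()]
    bullets = [ln for ln in lines if re.match(r"^[-*\u2022]\s+", ln)]
    # Every non-blank line must be a bullet, and the count must match exactly.
    return len(bullets) == expected_count and len(bullets) == len(lines)

# Passes: three bullets, nothing else.
ok = check_bullet_constraint("- Un point\n- Deux points\n- Trois points")
# Fails: a prose preamble before the bullets violates the format constraint.
bad = check_bullet_constraint("Voici trois points:\n- Un\n- Deux\n- Trois")
```

Checking the language constraint ("in French") is harder and typically needs a language-identification model rather than a regex.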

Why it matters

Instruction following is arguably the most practically important LLM capability. Users care less about whether a model "knows" more facts and more about whether it does what they actually asked. A model that writes beautiful prose but ignores your format requirements is less useful than one that reliably follows instructions. This is why IFEval and other instruction-following benchmarks have become central to model evaluation.

Deep Dive

Instruction following is trained through instruction tuning (SFT on instruction-response pairs) and refined through RLHF/DPO (learning to prefer responses that accurately follow instructions). The quality of instruction-following depends heavily on the diversity and precision of the training data: models that see many examples of "exactly 3 items" learn to count; models that only see vague instructions don't.
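A rough sketch of how one instruction-response pair becomes an SFT training example, assuming a common convention where the loss is masked so the model is only trained on the response tokens (the prompt template, the `-100` ignore index, and the toy tokenizer here are illustrative; real pipelines vary by model and framework):

```python
def build_sft_example(instruction: str, response: str, tokenize):
    """Format an instruction-response pair and build a label sequence
    that masks out the prompt, so loss applies only to the response."""
    prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"
    prompt_ids = tokenize(prompt)
    response_ids = tokenize(response)
    input_ids = prompt_ids + response_ids
    # -100 is a conventional "ignore this position" label in many
    # training frameworks; only response positions contribute to the loss.
    labels = [-100] * len(prompt_ids) + list(response_ids)
    return input_ids, labels

# Toy whitespace "tokenizer" just to show the shapes.
toy = lambda s: s.split()
ids, labels = build_sft_example("List 3 colors.", "- red\n- green\n- blue", toy)
```

Training on many pairs where the instruction states a precise constraint ("exactly 3") and the response satisfies it is what teaches the counting behavior described above.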

Where Models Fail

Common instruction-following failures: ignoring length constraints ("be brief" → still writes paragraphs), format drift (starting with the requested format but reverting to prose), constraint amnesia (following the first constraint but forgetting later ones in a complex instruction), and over-following (interpreting ambiguous instructions too literally or too broadly). These failures are more common in smaller models and become rarer with scale, but even frontier models occasionally miss constraints.
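Because these failures are partial (right format but wrong length, bullets that drift into prose), it helps to check each constraint independently rather than pass/fail the whole response. A minimal sketch, with illustrative constraints that are not from any standard benchmark:

```python
def check_constraints(response: str) -> dict:
    """Check a response against several independent constraints at once,
    so a partial failure is visible per constraint."""
    lines = [ln for ln in response.splitlines() if ln.strip()]
    return {
        "is_brief": len(response.split()) <= 50,   # "be brief"
        "all_bullets": all(ln.lstrip().startswith(("-", "*")) for ln in lines),
        "has_3_items": len(lines) == 3,
    }

# Format drift: starts with bullets, then reverts to prose.
report = check_constraints("- one\n- two\nAnd then some trailing prose...")
```

Here `report` shows the length and item-count constraints satisfied but `all_bullets` violated, which is exactly the format-drift pattern described above.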

System Prompts and Hierarchy

Instruction following becomes complex when instructions conflict: the system prompt says "always respond in JSON" but the user says "write me a poem." Most models implement an instruction hierarchy where system-level instructions take precedence over user messages, but the boundaries are fuzzy. Well-designed applications structure their instruction hierarchy clearly and test edge cases where different levels of instructions might conflict.
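One way to test such an edge case is to encode the conflict as a chat message list and assert that the system-level constraint still holds in the output. A minimal sketch, assuming an OpenAI-style role/content message format (roles and shapes vary across providers):

```python
import json

# The conflict from the text: system demands JSON, user asks for a poem.
conflict_case = [
    {"role": "system", "content": "Always respond with a JSON object."},
    {"role": "user", "content": "Write me a poem."},
]

def is_json_object(text: str) -> bool:
    """The system-level constraint we expect to win: output parses as a JSON object."""
    try:
        return isinstance(json.loads(text), dict)
    except ValueError:
        return False

# A model honoring the hierarchy can satisfy both levels at once:
compliant = '{"poem": "Roses are red, violets are blue..."}'
# A model that lets the user message override the system prompt fails the check:
noncompliant = "Roses are red, violets are blue..."
```

Running checks like this over a battery of conflict cases is one practical way to verify that an application's instruction hierarchy behaves as designed.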
