Instruction following is trained through instruction tuning (SFT on instruction-response pairs) and refined through RLHF/DPO (learning to prefer responses that accurately follow instructions). The quality of instruction-following depends heavily on the diversity and precision of the training data: models that see many examples of "exactly 3 items" learn to count; models that only see vague instructions don't.
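The point about data precision can be made concrete with a sketch of what an instruction-tuning pair might look like. The schema below is illustrative (not any specific dataset's format), and the `satisfies_count` helper is a hypothetical check showing how a precise constraint in the instruction can be verified against the target response:

```python
# Illustrative instruction-tuning pairs: the instruction carries a precise,
# checkable constraint, and the target response actually satisfies it.
sft_pairs = [
    {
        "instruction": "List exactly 3 benefits of unit testing.",
        "response": "1. Catches regressions early.\n"
                    "2. Documents intended behavior.\n"
                    "3. Enables safe refactoring.",
    },
]

def satisfies_count(pair, n=3):
    """Check that a numbered-list response has exactly n non-empty lines."""
    lines = [line for line in pair["response"].splitlines() if line.strip()]
    return len(lines) == n

print(satisfies_count(sft_pairs[0]))  # → True
```

Pairs where the constraint is stated but not honored in the target response teach exactly the wrong behavior, which is why this kind of validation matters at dataset-construction time.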
Common instruction-following failures: ignoring length constraints ("be brief" → still writes paragraphs), format drift (starting with the requested format but reverting to prose), constraint amnesia (following the first constraint but forgetting later ones in a complex instruction), and misreading ambiguity (interpreting an ambiguous instruction too literally or too broadly). These failures are more common in smaller models and become rarer with scale, but even frontier models occasionally miss constraints.
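Because these failures persist even in strong models, applications often re-check constraints programmatically rather than trusting the model. The following is a minimal sketch of such a validator; the function name, parameters, and constraint categories are hypothetical, chosen to mirror the failure modes above:

```python
import json
import re

def check_constraints(text, max_words=None, expect_json=False, exact_items=None):
    """Hypothetical post-hoc validator: return a list of violated constraints."""
    violations = []
    if max_words is not None and len(text.split()) > max_words:
        violations.append(f"length: {len(text.split())} words > {max_words}")
    if expect_json:
        try:
            json.loads(text)
        except ValueError:
            violations.append("format: response is not valid JSON")
    if exact_items is not None:
        # Count bulleted ("-", "*") or numbered ("1.") list items.
        items = re.findall(r"^\s*(?:[-*]|\d+\.)\s+", text, flags=re.M)
        if len(items) != exact_items:
            violations.append(f"count: {len(items)} items != {exact_items}")
    return violations

resp = "1. First\n2. Second\n3. Third\n4. Fourth"
print(check_constraints(resp, exact_items=3))
# → ['count: 4 items != 3']
```

On violation, an application can retry the request, repair the output, or surface an error, depending on how critical the constraint is.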
Instruction following becomes complex when instructions conflict: the system prompt says "always respond in JSON" but the user says "write me a poem." Most models implement an instruction hierarchy where system-level instructions take precedence over user messages, but the boundaries are fuzzy. Well-designed applications structure their instruction hierarchy clearly and test edge cases where different levels of instructions might conflict.
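The precedence rule can be sketched as a toy resolution function. The constraint keys and the merge policy below are purely illustrative, not any model's actual behavior; real instruction hierarchies are learned and fuzzy rather than implemented as an explicit merge:

```python
# Toy instruction hierarchy: system-level constraints win on conflict,
# user-level constraints apply wherever the system level is silent.
def resolve_instructions(system_constraints, user_constraints):
    """Merge two constraint levels with system-level precedence."""
    merged = dict(user_constraints)
    merged.update(system_constraints)  # system overrides on conflict
    return merged

system = {"output_format": "json"}
user = {"output_format": "poem", "topic": "autumn"}

print(resolve_instructions(system, user))
# → {'output_format': 'json', 'topic': 'autumn'}
```

Writing the intended precedence down this explicitly, even informally, makes it much easier to enumerate and test the conflict edge cases the paragraph above recommends covering.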