The process: collect thousands to millions of (instruction, ideal response) pairs covering diverse tasks — Q&A, summarization, coding, creative writing, math, conversation. Fine-tune the base model on these pairs using standard supervised learning (minimize the loss on the response tokens given the instruction). The model learns the meta-pattern: "when given an instruction, produce a helpful response."
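The "minimize the loss on the response tokens" step can be sketched concretely. A common convention (used by libraries such as Hugging Face's trainers) is to mask the instruction tokens in the label sequence with -100, which cross-entropy implementations typically ignore, so gradients flow only from the response. The token ids and the `build_labels` helper below are illustrative assumptions, not any specific library's API:

```python
IGNORE_INDEX = -100  # conventionally skipped by cross-entropy loss implementations

def build_labels(prompt_ids, response_ids):
    """Concatenate prompt and response ids; mask the prompt in the labels
    so supervised loss is computed only on the response tokens."""
    input_ids = prompt_ids + response_ids
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(response_ids)
    return input_ids, labels

# Hypothetical token ids for an (instruction, response) pair
inp, lab = build_labels([101, 7, 42], [9, 13, 2])
# inp -> [101, 7, 42, 9, 13, 2]
# lab -> [-100, -100, -100, 9, 13, 2]
```

Training then runs standard next-token prediction over `input_ids`, but only the unmasked label positions contribute to the loss.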
Instruction tuning (Supervised Fine-Tuning / SFT) is typically the first post-training step, followed by alignment via RLHF or DPO. SFT teaches the model the format and basic helpfulness. RLHF/DPO then refines the behavior — making responses more helpful, less harmful, and better calibrated. Some approaches (like ORPO) combine SFT and preference alignment into a single step.
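To make the preference-alignment step concrete, here is a minimal sketch of the DPO objective on a single preference pair, following the standard formulation: the policy's log-probabilities for the chosen and rejected responses are compared against a frozen reference model, and the loss is the negative log-sigmoid of the scaled margin. The numeric log-probs below are made-up values for illustration:

```python
import math

def dpo_loss(policy_chosen_lp, policy_rejected_lp,
             ref_chosen_lp, ref_rejected_lp, beta=0.1):
    """DPO loss for one (chosen, rejected) pair of sequence log-probs."""
    # Implicit rewards: how much more the policy likes each response than the reference does
    chosen_margin = policy_chosen_lp - ref_chosen_lp
    rejected_margin = policy_rejected_lp - ref_rejected_lp
    logits = beta * (chosen_margin - rejected_margin)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))  # -log sigmoid(logits)

# Hypothetical log-probs: the policy favors the chosen response relative to
# the reference, so the loss falls below -log(0.5) ~= 0.693
loss = dpo_loss(-10.0, -12.0, -10.5, -11.5, beta=0.1)
```

Pushing the chosen margin up and the rejected margin down drives the loss toward zero, which is exactly the "refine the behavior" step described above, without training a separate reward model.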
Research consistently shows that a small set of high-quality instruction-response pairs outperforms a large set of low-quality ones. The LIMA paper (Zhou et al., 2023) showed that fine-tuning with just 1,000 carefully curated examples could produce surprisingly good results. The key is diversity (covering many task types) and quality (responses that are genuinely excellent, not just adequate). This is why instruction data curation has become a specialized discipline.
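A toy sketch of curation in the LIMA spirit, selecting for quality and diversity. The specific heuristics here (a response-length floor and a per-task-type cap) are illustrative assumptions, not the paper's actual pipeline — real curation relies heavily on human review and much richer quality signals:

```python
def curate(pairs, min_response_chars=50, per_task_cap=2):
    """Keep pairs with substantive responses, capped per task type for diversity.

    `pairs` is a list of dicts with hypothetical keys "task" and "response".
    """
    kept = []
    counts = {}
    for pair in pairs:
        if len(pair["response"]) < min_response_chars:
            continue  # drop thin, low-effort responses (crude quality proxy)
        n = counts.get(pair["task"], 0)
        if n >= per_task_cap:
            continue  # cap each task type so one category can't dominate the mix
        counts[pair["task"]] = n + 1
        kept.append(pair)
    return kept
```

Even this crude filter illustrates the trade-off: tightening the quality floor shrinks the dataset, and the cap trades raw volume for coverage across task types.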