
Supervised Learning

A training approach where the model learns from labeled examples — input-output pairs where the correct answer is provided. "Here's an image of a cat; the label is 'cat'. Here's an image of a dog; the label is 'dog'." The model adjusts its parameters to minimize the difference between its predictions and the known correct answers.

Why it matters

Supervised learning is the most intuitive form of machine learning and still the workhorse behind most practical applications: spam filters, medical image analysis, fraud detection, and the fine-tuning phase of LLMs. When you have labeled data and a clear target, supervised learning is usually where you start.

Deep Dive

The core loop of supervised learning is: make a prediction, compare it to the label, compute a loss (how wrong you were), and adjust parameters to reduce that loss. This cycle repeats millions or billions of times during training. The math behind the adjustment is gradient descent: compute how much each parameter contributed to the error, then nudge each one in the direction that reduces the loss.
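To make the loop concrete, here is a minimal sketch: a one-parameter linear model fit with hand-computed gradient descent in NumPy. The data, learning rate, and step count are all made up for illustration.

```python
import numpy as np

# Toy labeled data: inputs x with known correct answers y (roughly y = 3x).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.1, 5.9, 9.2, 11.8])

w = 0.0    # the single parameter the model will learn
lr = 0.01  # learning rate: how far to nudge w each step

for step in range(1000):
    pred = w * x                   # 1. make a prediction
    error = pred - y               # 2. compare it to the label
    loss = np.mean(error ** 2)     # 3. compute a loss (mean squared error)
    grad = 2 * np.mean(error * x)  # 4. how much w contributed to the error
    w -= lr * grad                 # 5. nudge w to reduce the loss

print(round(w, 2))  # ends up near 3.0, the slope that generated the data
```

A real neural network has millions of parameters instead of one, and the per-parameter gradients come from backpropagation rather than a hand-derived formula, but the loop is the same.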

It's Everywhere in LLMs

Pre-training an LLM is technically a form of self-supervised learning (the labels are generated from the data itself — the "label" for each position is just the next token in the text). But fine-tuning and RLHF both use supervised signals: human-written example responses, or human preference rankings between model outputs. When you fine-tune a model on customer support conversations, you're doing supervised learning with the support agent's responses as labels.
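A tiny sketch makes the self-supervised trick visible. The token IDs below are made up; in practice they would come from a tokenizer.

```python
# Made-up token IDs standing in for a tokenizer's output on a short text.
tokens = [101, 42, 7, 256, 9]

# The "label" at each position is just the next token: the label sequence
# is the input sequence shifted by one. No human annotation required.
inputs = tokens[:-1]  # [101, 42, 7, 256]
labels = tokens[1:]   # [42, 7, 256, 9]

for context_end, target in zip(inputs, labels):
    print(f"context ending in token {context_end} -> predict token {target}")
```

Every position in every document becomes a training example for free, which is why pre-training can scale to internet-sized corpora.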

The Data Bottleneck

The catch with supervised learning is that you need labeled data, and labels are expensive. Every medical image needs a radiologist to annotate it. Every support conversation needs a quality rating. This is why techniques like self-supervised learning (letting the model generate its own labels from unlabeled data) and semi-supervised learning (using a small labeled set to bootstrap labels for a larger unlabeled set) are so important — they reduce the labeling bottleneck that limits pure supervised approaches.
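One common semi-supervised recipe is pseudo-labeling, sketched below with scikit-learn on synthetic data: fit on the small labeled set, let the model label the unlabeled pool, keep only its confident predictions, and retrain on the combined set. The dataset and the 0.9 confidence cutoff are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# A small, expensive labeled set and a large, cheap unlabeled pool.
X_labeled = np.array([[-2.0], [-1.5], [-1.0], [1.0], [1.5], [2.0]])
y_labeled = np.array([0, 0, 0, 1, 1, 1])
X_unlabeled = rng.uniform(-3.0, 3.0, size=(200, 1))

# 1. Train on the labeled data alone.
model = LogisticRegression().fit(X_labeled, y_labeled)

# 2. Let the model label the pool, keeping only confident predictions.
confidence = model.predict_proba(X_unlabeled).max(axis=1)
keep = confidence > 0.9
X_pseudo = X_unlabeled[keep]
y_pseudo = model.predict(X_pseudo)

# 3. Retrain on the labeled and pseudo-labeled examples combined.
model = LogisticRegression().fit(
    np.vstack([X_labeled, X_pseudo]),
    np.concatenate([y_labeled, y_pseudo]),
)
print(f"kept {keep.sum()} of {len(X_unlabeled)} pseudo-labels")
```

The confidence filter is what keeps this from amplifying the model's own mistakes, though a weak initial model can still poison the pool, so pseudo-labels are usually added cautiously in practice.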
