
Classification

Classifier, Categorization
The task of assigning an input to one of a predefined set of categories. "Is this email spam or not?" (binary classification). "Is this image a cat, dog, or bird?" (multi-class). "Which of these tags apply to this article?" (multi-label). Classification is the most common supervised learning task and the foundation of countless real-world AI applications.

Why It Matters

Classification is where most people first encounter machine learning in practice — spam filters, content moderation, medical diagnosis, fraud detection, sentiment analysis. Understanding classification helps you understand the entire supervised learning pipeline: labeled data in, trained model, predictions out.

Deep Dive

A classifier outputs a probability distribution over classes. For binary classification, a single number between 0 and 1 suffices (the probability of the positive class). For multi-class, the model outputs a probability for each class, typically using a softmax function to ensure they sum to 1. The predicted class is usually the one with the highest probability, but you can adjust the decision threshold based on your tolerance for false positives vs. false negatives.
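The softmax and thresholding ideas above can be sketched in a few lines of plain Python. This is a minimal illustration, not tied to any particular ML library; the example logits and labels are invented for demonstration.

```python
import math

def softmax(logits):
    """Convert raw model scores (logits) into probabilities that sum to 1."""
    shifted = [x - max(logits) for x in logits]  # subtract max for numerical stability
    exps = [math.exp(x) for x in shifted]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits, labels):
    """Default decision rule: pick the class with the highest probability."""
    probs = softmax(logits)
    return labels[probs.index(max(probs))]

def binary_decision(p_positive, threshold=0.5):
    """Binary case: a single probability compared against a threshold.
    Lowering the threshold catches more positives (fewer false negatives)
    at the cost of more false positives."""
    return p_positive >= threshold

# Example: three-class image classifier with made-up logits.
print(predict([2.0, 1.0, 0.1], ["cat", "dog", "bird"]))  # highest logit wins: cat
print(binary_decision(0.3))                 # False under the default 0.5 threshold
print(binary_decision(0.3, threshold=0.2))  # True once the threshold is lowered
```

Note how the same probability (0.3) leads to different decisions depending on the threshold — that knob is how you trade false positives against false negatives without retraining anything.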

LLMs as Classifiers

Modern LLMs are surprisingly good classifiers. Instead of training a dedicated model, you can prompt an LLM: "Classify this customer review as positive, negative, or neutral." For many classification tasks, this zero-shot approach matches or exceeds purpose-built classifiers, especially when the task requires understanding nuance or context. The trade-off is cost and latency — an LLM API call is much more expensive than running a small classifier locally.
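A zero-shot LLM classifier can be sketched as a prompt builder plus a reply parser. Here `call_llm` is a hypothetical stand-in for whatever chat-completion API you use; the prompt wording and fallback behavior are illustrative choices, not a prescribed recipe.

```python
LABELS = ["positive", "negative", "neutral"]

def build_prompt(review):
    """Zero-shot prompt: no training data, just instructions."""
    return (
        "Classify this customer review as positive, negative, or neutral. "
        "Answer with a single word.\n\nReview: " + review
    )

def parse_label(response, labels=LABELS, fallback="neutral"):
    """Normalize the model's free-text reply to one of the allowed labels;
    fall back to a safe default if the reply doesn't match any label."""
    text = response.strip().lower().rstrip(".")
    return text if text in labels else fallback

def classify(review, call_llm):
    """`call_llm` is any function mapping a prompt string to a reply string
    (e.g. a wrapper around your LLM provider's API)."""
    return parse_label(call_llm(build_prompt(review)))
```

Constraining the model to a fixed label set and parsing defensively matters in practice: LLMs sometimes reply with extra punctuation or a full sentence, and an unparseable reply should degrade gracefully rather than crash the pipeline.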

Metrics That Matter

Accuracy (percent correct) is the most intuitive metric but can be misleading. If 99% of emails are not spam, a model that always predicts "not spam" gets 99% accuracy but catches zero spam. Precision (of predicted positives, how many are correct), recall (of actual positives, how many were found), and F1 (harmonic mean of precision and recall) give a more complete picture. The right metric depends on the cost of errors in your specific application.
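The spam example above can be worked through numerically. This sketch computes accuracy, precision, recall, and F1 from scratch on the degenerate "always predict not-spam" baseline; the 1-spam-in-100 dataset is invented to match the section's 99% figure.

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for the given positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted positives, how many correct
    recall = tp / (tp + fn) if tp + fn else 0.0     # of actual positives, how many found
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# The misleading baseline: 1 spam email among 100, model always says "not spam".
y_true = [1] + [0] * 99
y_pred = [0] * 100
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
p, r, f1 = precision_recall_f1(y_true, y_pred)
print(accuracy)  # 0.99 — looks great
print(r)         # 0.0  — catches zero spam
```

Accuracy rewards the majority-class shortcut; recall exposes it. On imbalanced data, report precision and recall (or F1) for the class you actually care about.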
