Core Concepts

Classification

Classifier, Categorization
The task of assigning an input to one of a predefined set of categories. "Is this email spam or not?" (binary classification). "Is this image a cat, a dog, or a bird?" (multi-class). "Which of these tags apply to this article?" (multi-label). Classification is the most common supervised learning task and the foundation of countless real-world AI applications.
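The three variants differ in what a label looks like. A minimal sketch of the label shapes (the example inputs and tag names here are purely illustrative):

```python
# Binary classification: one label drawn from exactly two classes.
binary_example = ("Congratulations, you won a prize!", "spam")

# Multi-class: one label drawn from more than two mutually exclusive classes.
multiclass_example = ("photo_0042.jpg", "cat")  # exactly one of cat / dog / bird

# Multi-label: any subset of the tag set may apply at once, including none.
multilabel_example = ("article_17.txt", {"politics", "economy"})
```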

Why It Matters

Classification is where most people first encounter machine learning in practice: spam filters, content moderation, medical diagnosis, fraud detection, sentiment analysis. Understanding classification helps you understand the entire supervised learning pipeline: labeled data in, trained model, predictions out.

Deep Dive

A classifier outputs a probability distribution over classes. For binary classification, a single number between 0 and 1 suffices (the probability of the positive class). For multi-class, the model outputs a probability for each class, typically using a softmax function to ensure they sum to 1. The predicted class is usually the one with the highest probability, but you can adjust the decision threshold based on your tolerance for false positives vs. false negatives.
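A minimal sketch of both ideas, using plain Python (the logits and the 0.5 default threshold are illustrative):

```python
import math

def softmax(logits):
    """Convert raw model scores (logits) into probabilities that sum to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Multi-class: predict the class with the highest probability.
logits = [2.0, 1.0, 0.1]               # hypothetical scores for cat, dog, bird
probs = softmax(logits)
predicted = max(range(len(probs)), key=lambda i: probs[i])  # index 0 ("cat")

# Binary: compare the positive-class probability to a tunable threshold.
def classify_binary(p_positive, threshold=0.5):
    return "spam" if p_positive >= threshold else "not spam"
```

Lowering the threshold below 0.5 trades more false positives for fewer false negatives, which is exactly the knob you turn when missing a positive is costlier than a false alarm.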

LLMs as Classifiers

Modern LLMs are surprisingly good classifiers. Instead of training a dedicated model, you can prompt an LLM: "Classify this customer review as positive, negative, or neutral." For many classification tasks, this zero-shot approach matches or exceeds purpose-built classifiers, especially when the task requires understanding nuance or context. The trade-off is cost and latency — an LLM API call is much more expensive than running a small classifier locally.
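The moving parts on your side of an LLM-as-classifier setup are the prompt and the parsing of the free-text reply back into a label. A hedged sketch (the prompt wording and helper names are illustrative, not a specific provider's API; the actual model call is omitted):

```python
def build_sentiment_prompt(review):
    """Zero-shot classification prompt; constrain the model to one-word answers."""
    return (
        "Classify this customer review as positive, negative, or neutral. "
        "Answer with one word only.\n\nReview: " + review
    )

def parse_label(response, labels=("positive", "negative", "neutral")):
    """Map the model's free-text reply onto one of the allowed labels."""
    text = response.strip().lower()
    for label in labels:
        if label in text:
            return label
    return None  # off-format answer; caller should retry or fall back
```

Constraining the output format in the prompt and validating it in code is what makes the approach dependable enough for a production pipeline.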

Metrics That Matter

Accuracy (percent correct) is the most intuitive metric but can be misleading. If 99% of emails are not spam, a model that always predicts "not spam" gets 99% accuracy but catches zero spam. Precision (of predicted positives, how many are correct), recall (of actual positives, how many were found), and F1 (harmonic mean of precision and recall) give a more complete picture. The right metric depends on the cost of errors in your specific application.
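The spam example above can be checked directly. A minimal sketch of the four metrics from raw labels (the all-"not spam" predictor and the 98/2 class split are illustrative):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall, and F1 for a binary label list."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, precision, recall, f1

# 98 legitimate emails, 2 spam; the model always predicts "not spam" (0).
y_true = [0] * 98 + [1, 1]
y_pred = [0] * 100
acc, prec, rec, f1 = classification_metrics(y_true, y_pred)
# High accuracy (0.98), but recall is 0.0: every spam email slips through.
```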

Related Concepts
