Fundamentals

Clustering

K-Means, DBSCAN, Cluster Analysis
An unsupervised learning task that groups similar data points together without predefined labels. Given customer purchase data, clustering might discover distinct customer segments (bargain hunters, luxury shoppers, occasional buyers). K-means is the most common algorithm: choose K clusters, assign each point to the nearest cluster center, and iteratively refine the centers.

Why it matters

Clustering is the most common unsupervised learning task and shows up everywhere: customer segmentation, document grouping, anomaly detection (outliers that fit no cluster), image compression (grouping similar pixels), and data exploration (what natural groups exist in my data?). It is often the first step toward understanding a new dataset.

Deep Dive

K-means works by: (1) randomly initializing K cluster centers, (2) assigning each data point to the nearest center, (3) moving each center to the mean of its assigned points, (4) repeating steps 2–3 until convergence. The main challenge: choosing K. The "elbow method" (plot loss vs. K and find the bend) and silhouette scores are common heuristics, but the right number of clusters often requires domain knowledge.
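The four steps above can be sketched in a few lines of plain Python (a minimal illustration of Lloyd's algorithm, not a production implementation; real code would use a library such as scikit-learn):

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points (tuples)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Component-wise mean of a list of points."""
    n = len(pts)
    return tuple(sum(c) / n for c in zip(*pts))

def kmeans(points, k, iters=100, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)                  # (1) random initialization
    for _ in range(iters):
        # (2) assign each point to the nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: dist2(p, centers[i]))
            clusters[i].append(p)
        # (3) move each center to the mean of its assigned points
        new_centers = [mean(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:                   # (4) stop at convergence
            break
        centers = new_centers
    return centers, clusters
```

Running it on two well-separated blobs recovers one center per blob; rerunning with different seeds (or using k-means++ initialization) guards against bad random starts.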

Beyond K-Means

DBSCAN discovers clusters of arbitrary shapes (K-means assumes spherical clusters) and automatically identifies outliers as noise points. Hierarchical clustering builds a tree of nested clusters that you can cut at any level. Gaussian Mixture Models (GMMs) model clusters as probability distributions, allowing soft assignments (a point can partially belong to multiple clusters). Each method has strengths for different data geometries and use cases.

Clustering with Embeddings

Combining embeddings with clustering is powerful for text analysis. Embed a collection of documents using a sentence embedding model, then cluster the embeddings. Each cluster represents a semantic group — topics, themes, or categories that emerge from the data. This is used for: organizing support tickets by topic, discovering themes in survey responses, grouping similar products, and topic modeling (a modern alternative to LDA). The clusters can then be labeled by asking an LLM to summarize what each cluster is about.
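A minimal sketch of the embed-then-cluster pipeline, using hand-made 3-d vectors in place of real embeddings (in practice the vectors would come from a sentence embedding model, and the grouping step would be K-means or DBSCAN rather than this greedy cosine-similarity pass):

```python
import math

def cos(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings" for four support tickets (hand-crafted for illustration).
docs = [
    ("Refund not received",       (0.9, 0.1, 0.0)),   # billing
    ("Charged twice this month",  (0.8, 0.2, 0.1)),   # billing
    ("App crashes on startup",    (0.1, 0.9, 0.1)),   # bugs
    ("Login button does nothing", (0.0, 0.8, 0.2)),   # bugs
]

# Greedy grouping: join the first group whose seed document is similar enough.
groups = []
for name, vec in docs:
    for g in groups:
        if cos(vec, g[0][1]) > 0.8:
            g.append((name, vec))
            break
    else:
        groups.append([(name, vec)])
```

Each resulting group is a semantic cluster (here, billing issues vs. app bugs); the titles in each group are exactly what you would hand to an LLM to produce a human-readable cluster label.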
