K-means works by: (1) randomly initializing K cluster centers, (2) assigning each data point to the nearest center, (3) moving each center to the mean of its assigned points, (4) repeating steps 2–3 until the assignments stop changing. The main challenge: choosing K. The "elbow method" (plot the within-cluster sum of squared distances against K and look for the bend) and silhouette scores are common heuristics, but the right number of clusters often requires domain knowledge.
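The four steps map directly onto code. Here is a minimal pure-Python sketch — the `kmeans` function, its parameters, and the toy data are all invented for illustration, not a production implementation (real work would typically use `sklearn.cluster.KMeans`, which adds smarter initialization and multiple restarts):

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Minimal K-means over equal-length lists of floats."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]   # step 1: random init
    for _ in range(iters):
        # step 2: assign each point to the nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # step 3: move each center to the mean of its assigned points
        new_centers = [[sum(dim) / len(cl) for dim in zip(*cl)] if cl else centers[c]
                       for c, cl in enumerate(clusters)]
        if new_centers == centers:   # step 4: stop once centers stop moving
            break
        centers = new_centers
    return centers, clusters

# Two well-separated blobs; K-means recovers their means.
pts = [[0, 0], [0, 1], [1, 0], [1, 1],
       [10, 10], [10, 11], [11, 10], [11, 11]]
centers, clusters = kmeans(pts, 2)
```

Note that a single run can get stuck in a local optimum on harder data, which is why library implementations rerun with several random initializations and keep the best result.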
DBSCAN discovers clusters of arbitrary shape (K-means assumes roughly spherical, similarly sized clusters) and automatically flags outliers as noise rather than forcing them into a cluster. Hierarchical clustering builds a tree of nested clusters that you can cut at any level. Gaussian Mixture Models (GMMs) model clusters as probability distributions, allowing soft assignments (a point can partially belong to multiple clusters). Each method has strengths for different data geometries and use cases.
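To make DBSCAN's behavior concrete, here is a minimal sketch of the classic procedure — points within `eps` of at least `min_pts` neighbors are "core" points that seed and grow clusters, and everything unreachable is noise. The function, the toy data, and the `eps`/`min_pts` values are all chosen for illustration; in practice you would reach for `sklearn.cluster.DBSCAN`:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one cluster label per point, -1 for noise."""
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:       # not a core point: mark as noise
            labels[i] = -1            # (may later become a border point)
            continue
        labels[i] = cluster           # new cluster seeded by this core point
        queue = [j for j in nbrs if j != i]
        while queue:                  # grow through density-reachable points
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # noise reclaimed as a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:    # j is also core: keep expanding
                queue.extend(jn)
        cluster += 1
    return labels

# Two dense blobs plus one isolated point, which comes back labeled -1.
pts = [[0, 0], [0, 1], [1, 0], [1, 1],
       [10, 10], [10, 11], [11, 10], [11, 11], [5, 50]]
labels = dbscan(pts, eps=1.5, min_pts=3)
```

Unlike K-means, no cluster count is specified up front — the number of clusters falls out of the density parameters.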
Combining embeddings with clustering is powerful for text analysis. Embed a collection of documents using a sentence embedding model, then cluster the embeddings. Each cluster represents a semantic group — topics, themes, or categories that emerge from the data. This is used for: organizing support tickets by topic, discovering themes in survey responses, grouping similar products, and topic modeling (a modern alternative to LDA). The clusters can then be labeled by asking an LLM to summarize what each cluster is about.
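The pipeline can be sketched end to end. In this toy version a normalized bag-of-words vector stands in for a real sentence embedding so the example runs anywhere — a real system would call an embedding model (e.g. via the sentence-transformers library) instead of the `embed` stand-in, and would run full K-means rather than the single assignment step shown. The documents, vocabulary, and seed vectors are all invented for illustration:

```python
from collections import Counter
import math

def embed(text, vocab):
    """Toy stand-in for a sentence embedding: a normalized bag-of-words
    vector. (A real pipeline would call an actual embedding model.)"""
    counts = Counter(text.lower().split())
    vec = [counts[w] for w in vocab]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

# Hypothetical support tickets: two about billing, two about a login bug.
docs = [
    "refund my payment please",
    "payment failed need refund",
    "app crashes on login",
    "login screen crashes",
]
vocab = sorted({w for d in docs for w in d.lower().split()})
vecs = [embed(d, vocab) for d in docs]

# One K-means-style assignment step: group each document with the
# nearest of two seed vectors (one billing doc, one login doc).
seeds = [vecs[0], vecs[2]]
def nearest(v):
    return min(range(len(seeds)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(v, seeds[i])))
groups = [nearest(v) for v in vecs]
```

The semantic grouping emerges from vector proximity alone; the final labeling step — asking an LLM to summarize each group — replaces the manual inspection a human would otherwise do.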