Zubnet AILearnWiki › Drift Detection
Infrastructure

Drift Detection

Data Drift, Model Drift, Concept Drift
Monitoring for changes in the data distribution or model behavior over time that could degrade performance. Data drift: the input data changes (customer demographics shift, new product categories appear). Concept drift: the relationship between inputs and correct outputs changes (what constitutes spam evolves). Model drift: the model's predictions gradually become less accurate even though the model itself hasn't changed.

Why it matters

Models are trained on historical data, but the world keeps changing. A fraud detection model trained in 2024 will miss 2025's new fraud patterns. A recommendation system trained on pre-pandemic behavior will make poor suggestions post-pandemic. Drift detection catches these degradations before they become costly — alerting you that the model needs retraining or updating.

Deep Dive

Data drift detection: compare the statistical distribution of current inputs to the training data distribution. If features shift significantly (measured with tests like the Kolmogorov–Smirnov (KS) test, the Population Stability Index (PSI), or Jensen–Shannon divergence), the model may be operating outside its training distribution. Example: a credit scoring model trained on applicants aged 25–55 starts receiving applications from 18-year-olds — a population it's never seen.
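Of the tests above, PSI is one of the simplest to implement. A minimal numpy sketch; the decile binning, the 1e-6 floor on bin proportions, and the example age populations are illustrative choices, not a standard implementation:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index: compares a production sample of one
    feature ('actual') against the training-time sample ('expected')."""
    # Bin edges from the reference distribution's quantiles (deciles by default)
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip production values into the reference range so nothing falls outside
    actual = np.clip(actual, edges[0], edges[-1])
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions so empty bins don't produce log(0)
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(40, 8, 10_000)  # e.g. applicant age at training time
live = rng.normal(30, 8, 10_000)   # production population skews younger
print(psi(train, train[:5000]))    # same distribution: near 0
print(psi(train, live))            # shifted distribution: large PSI
```

A common rule of thumb treats PSI below 0.1 as stable, 0.1–0.25 as moderate shift, and above 0.25 as a significant shift worth investigating.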

Concept Drift

Concept drift is harder to detect because the inputs look the same but the correct outputs change. During COVID, "normal" purchase patterns shifted dramatically — buying 100 rolls of toilet paper went from "probable fraud" to "Tuesday." The model's predictions became wrong not because the model degraded, but because reality changed. Detecting concept drift requires comparing predictions to ground truth, which often arrives with a delay.
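Because labels arrive late, concept drift monitoring in practice is a join between past predictions and late-arriving ground truth. A minimal sketch of the idea as a rolling-accuracy alert; the `RollingAccuracyMonitor` name, window size, and thresholds are illustrative, and libraries such as river ship more principled detectors (DDM, ADWIN):

```python
from collections import deque

class RollingAccuracyMonitor:
    """Track correctness as delayed ground-truth labels arrive and alert
    when rolling accuracy falls below baseline minus tolerance."""
    def __init__(self, window=500, baseline=0.90, tolerance=0.05):
        self.outcomes = deque(maxlen=window)  # 1 if correct, 0 if not
        self.baseline = baseline
        self.tolerance = tolerance

    def record(self, prediction, truth):
        """Call when the ground-truth label for a past prediction arrives."""
        self.outcomes.append(int(prediction == truth))

    def accuracy(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else None

    def drifted(self):
        # Only judge on a full window, to avoid noisy alerts on few labels
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        return self.accuracy() < self.baseline - self.tolerance

monitor = RollingAccuracyMonitor(window=100)
# As fraud labels are confirmed weeks later:
#   monitor.record(predicted_label, confirmed_label)
#   if monitor.drifted(): trigger an investigation or retraining
```

The fixed window is the key trade-off: too small and normal variance triggers false alarms, too large and the alert lags the drift it is meant to catch.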

For LLMs

LLM drift manifests differently: user query patterns shift (new topics emerge), provider model updates change behavior (API model versions change silently), and the world changes (outdated training data). Monitoring strategies include: tracking output quality scores over time, detecting shifts in topic distribution of queries, alerting on increases in user-reported issues, and periodically re-evaluating on a fixed benchmark to detect provider-side changes.
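One of the strategies above, detecting shifts in the topic distribution of queries, can be sketched with Jensen–Shannon divergence over topic counts. The topic buckets, the counts, and the 0.1 alert threshold are all illustrative assumptions; in practice the bucketing would come from a topic classifier or embedding clustering:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2, so bounded in [0, 1])
    between two query-topic distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()  # normalize counts to probabilities
    m = (p + q) / 2

    def kl(a, b):
        mask = a > 0                 # 0 * log(0) is treated as 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical query counts per topic bucket (coding, writing, search, math, other)
last_month = [400, 300, 200, 100, 0]
this_week = [150, 150, 100, 100, 500]  # a new topic suddenly dominates

score = js_divergence(last_month, this_week)
if score > 0.1:  # alert threshold is an assumption; tune on historical weeks
    print(f"topic drift alert: JSD={score:.2f}")
```

The same comparison against a frozen week of traffic from just before a provider model-version change also helps separate "our users changed" from "the API behind us changed".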
