Data drift detection: compare the statistical distribution of current inputs to the training data distribution. If feature distributions shift significantly (measured with tests such as the Kolmogorov-Smirnov (KS) test, the Population Stability Index (PSI), or Jensen-Shannon divergence), the model may be operating outside its training distribution. Example: a credit scoring model trained on applicants aged 25–55 starts receiving applications from 18-year-olds, a population it has never seen.
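A minimal sketch of both checks, assuming per-feature numeric samples (the populations and the PSI threshold below are illustrative; a PSI above roughly 0.25 is commonly treated as a major shift):

```python
import numpy as np
from scipy import stats

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference and a current sample."""
    # Bin edges come from the reference (training) distribution's quantiles.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    # Floor to avoid log(0) on empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_ages = rng.normal(40, 8, 10_000)    # training population
live_ages = rng.normal(30, 10, 10_000)    # younger live traffic: drifted

ks_stat, p_value = stats.ks_2samp(train_ages, live_ages)
print(f"KS statistic: {ks_stat:.3f} (p={p_value:.3g})")
print(f"PSI: {psi(train_ages, live_ages):.3f}")
```

In practice each feature is tested separately on a schedule (e.g. daily), and an alert fires when the statistic crosses a threshold tuned for that feature.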
Concept drift is harder to detect because the inputs look the same but the correct outputs change. During COVID, "normal" purchase patterns shifted dramatically — buying 100 rolls of toilet paper went from "probable fraud" to "Tuesday." The model's predictions became wrong not because the model degraded, but because reality changed. Detecting concept drift requires comparing predictions to ground truth, which often arrives with a delay.
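Once delayed labels are joined back to their predictions, the comparison can be a simple rolling-accuracy check; this is a hedged sketch, with the stream, window size, and thresholds all illustrative:

```python
from collections import deque

def rolling_accuracy_alerts(matched_pairs, window=200, baseline=0.95, tolerance=0.05):
    """Yield (index, accuracy) whenever accuracy over the last `window`
    labeled predictions falls more than `tolerance` below `baseline`.

    `matched_pairs` is an iterable of (prediction, ground_truth) tuples,
    joined once the delayed labels arrive.
    """
    recent = deque(maxlen=window)
    for i, (pred, truth) in enumerate(matched_pairs):
        recent.append(pred == truth)
        if len(recent) == window:
            acc = sum(recent) / window
            if acc < baseline - tolerance:
                yield i, acc

# Simulated stream: the model tracks reality, then the world changes and
# the same predictions start being wrong (concept drift).
stream = [(1, 1)] * 1000 + [(1, 0)] * 300
alerts = list(rolling_accuracy_alerts(stream))
print(f"first alert at labeled example {alerts[0][0]}")
```

The label delay is the operational catch: if ground truth arrives weeks late (as with loan defaults), the window lags reality by the same amount, which is why label-free proxies like prediction-distribution monitoring are often layered on top.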
LLM drift manifests differently: user query patterns shift (new topics emerge), provider model updates change behavior (API model versions change silently), and the world changes, leaving the model's training data outdated. Monitoring strategies include: tracking output quality scores over time, detecting shifts in the topic distribution of queries, alerting on increases in user-reported issues, and periodically re-evaluating on a fixed benchmark to detect provider-side changes.
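The fixed-benchmark check can be sketched as follows; `call_model`, the exact-match scoring rule, and the toy benchmark are all stand-ins for whatever LLM client and evaluation metric are actually in use:

```python
def exact_match(output: str, expected: str) -> float:
    """Toy scoring rule; real evaluations usually use something richer."""
    return 1.0 if output.strip().lower() == expected.strip().lower() else 0.0

def benchmark_score(call_model, benchmark) -> float:
    """Average score of the model on a fixed (prompt, expected) benchmark."""
    scores = [exact_match(call_model(prompt), expected)
              for prompt, expected in benchmark]
    return sum(scores) / len(scores)

def check_provider_drift(call_model, benchmark, baseline, tolerance=0.05):
    """Re-run the fixed benchmark; flag a drop beyond `tolerance` below the
    recorded baseline score, which suggests a silent provider-side change."""
    score = benchmark_score(call_model, benchmark)
    return score, (baseline - score) > tolerance

# Toy demo with dict-backed "models" whose behavior changed silently.
benchmark = [("capital of France?", "Paris"), ("2+2?", "4")]
old_model = {"capital of France?": "Paris", "2+2?": "4"}.get
new_model = {"capital of France?": "paris is the capital", "2+2?": "4"}.get
score, drifted = check_provider_drift(new_model, benchmark, baseline=1.0)
```

Because the benchmark and baseline stay fixed, any score movement isolates provider-side change from shifts in live traffic, which the other monitors cover.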