Infrastructure

AI Observability

LLM Monitoring, AI Tracing, LLMOps
Real-time monitoring and understanding of AI system behavior in production: tracking inputs, outputs, latency, cost, errors, and quality metrics. AI observability is like application monitoring (Datadog, New Relic) but specialized for AI: it traces prompt-response pairs, detects quality degradation, monitors for hallucinations, and alerts on anomalous behavior.

Why It Matters

Deploying an AI system without observability is flying blind. You don't know whether the model is hallucinating more than usual, whether latency is creeping up, whether a certain class of queries is failing, or whether costs are spiking. AI observability turns "it seems to work" into "we know it works, and we know when it doesn't." That is the difference between a demo and a production system.

Deep Dive

Core observability signals for AI: request/response logs (what did users ask, what did the model respond), latency metrics (TTFT, tokens per second, total response time), cost tracking (tokens consumed, API spend), quality metrics (user feedback, automated quality scores), error rates (API failures, rate limits, content filter triggers), and safety metrics (refusal rates, flagged content, prompt injection attempts).
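The per-request signals above can be sketched as a single log record. This is a minimal illustration, not any vendor's schema: the field names, pricing constants, and derived metrics are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical per-request log record combining the core signals:
# request/response text, latency (TTFT, total), tokens, and cost.
@dataclass
class LLMRequestLog:
    prompt: str
    response: str
    prompt_tokens: int
    completion_tokens: int
    ttft_s: float                     # time to first token, seconds
    total_s: float                    # total response time, seconds
    price_per_1k_in: float = 0.003    # assumed pricing, USD per 1K input tokens
    price_per_1k_out: float = 0.015   # assumed pricing, USD per 1K output tokens

    @property
    def tokens_per_second(self) -> float:
        # Generation throughput over the whole request.
        return self.completion_tokens / self.total_s if self.total_s else 0.0

    @property
    def cost_usd(self) -> float:
        # API spend for this single request.
        return (self.prompt_tokens * self.price_per_1k_in
                + self.completion_tokens * self.price_per_1k_out) / 1000

log = LLMRequestLog("What is RAG?", "RAG is retrieval-augmented generation...",
                    prompt_tokens=12, completion_tokens=150,
                    ttft_s=0.4, total_s=3.0)
print(log.tokens_per_second)  # → 50.0
```

Aggregating records like this over time is what lets dashboards answer "is latency creeping up?" or "which route is burning tokens?"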

Tracing

For complex AI applications (RAG pipelines, multi-agent systems), tracing follows a request through every step: the user query, the retrieval results, the prompt construction, the model call, the post-processing, and the final response. Each step is logged with inputs, outputs, latency, and cost. When something goes wrong, traces let you identify exactly where in the pipeline the failure occurred. LangSmith, Langfuse, and Braintrust provide LLM-specific tracing.
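The step-by-step tracing described above can be sketched with a context manager that records each pipeline stage. This is a toy illustration of the idea, not the LangSmith, Langfuse, or Braintrust API; the span fields and the stubbed pipeline steps are assumptions.

```python
import time
import uuid
from contextlib import contextmanager

# Each pipeline step becomes a "span": a record of its inputs,
# output, and latency, appended to a shared trace.
TRACE = []

@contextmanager
def span(name, **inputs):
    record = {"id": uuid.uuid4().hex[:8], "name": name,
              "inputs": inputs, "output": None, "latency_s": None}
    start = time.perf_counter()
    try:
        yield record
    finally:
        record["latency_s"] = time.perf_counter() - start
        TRACE.append(record)

# Simulated RAG pipeline: retrieval -> prompt construction -> model call.
query = "What is vector search?"
with span("retrieve", query=query) as s:
    s["output"] = ["doc1", "doc2"]                 # stand-in retrieval results
with span("build_prompt", num_docs=2) as s:
    s["output"] = f"Context: doc1 doc2\nQ: {query}"
with span("llm_call", model="stub-model") as s:
    s["output"] = "Vector search finds nearest-neighbor embeddings."

for r in TRACE:
    print(r["name"], f'{r["latency_s"]:.6f}s')
```

When a bad response comes back, walking the trace in order shows whether retrieval returned the wrong documents, the prompt was malformed, or the model call itself failed.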

Quality Monitoring

The hardest part of AI observability: automatically detecting when output quality degrades. Approaches include: LLM-as-judge (use a model to score outputs), embedding drift detection (if the distribution of outputs changes significantly, something may be wrong), user feedback signals (thumbs up/down, regeneration rates), and regression testing (periodically run a golden set of queries and compare outputs to baselines). No single approach catches everything — production systems use multiple signals.
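One of the signals above, regression testing against a golden set, can be sketched in a few lines. The golden queries, the stub model, and the string-similarity threshold are all assumptions for illustration; real systems would use semantic similarity or an LLM judge rather than character-level matching.

```python
import difflib

# Hypothetical golden set: queries with known-good baseline answers.
GOLDEN = {
    "capital of France": "The capital of France is Paris.",
    "2 + 2": "2 + 2 equals 4.",
}

def similarity(a: str, b: str) -> float:
    # Crude character-level similarity in [0, 1]; a stand-in for
    # embedding similarity or an LLM-as-judge score.
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def check_regressions(model_fn, threshold=0.8):
    """Re-run every golden query and flag answers that drift from baseline."""
    failures = []
    for query, baseline in GOLDEN.items():
        output = model_fn(query)
        if similarity(output, baseline) < threshold:
            failures.append((query, output))
    return failures

# Stub model whose second answer has drifted from the baseline.
def stub_model(query):
    return {"capital of France": "The capital of France is Paris.",
            "2 + 2": "The answer depends on context."}[query]

for query, output in check_regressions(stub_model):
    print(f"regression on {query!r}: {output!r}")
```

Run periodically (e.g. after each prompt or model change), this catches silent degradation that per-request error metrics miss.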

Related Concepts
