
Human Evaluation

Also known as: Human Eval, Manual Evaluation
Evaluation in which humans directly judge the quality of AI outputs. Human raters assess fluency, accuracy, helpfulness, safety, and whether the output actually satisfies the request. Although slow and expensive, human evaluation remains the gold standard because automated metrics often miss what matters most to users.

Why It Matters

Every automated metric is a proxy for human judgment, and every proxy has blind spots. BLEU cannot detect factual errors. Perplexity cannot measure helpfulness. Even LLM-as-judge approaches inherit biases (such as favoring verbose responses). When the stakes are high, such as shipping a product, comparing model versions, or auditing safety, human evaluation has no substitute.

Deep Dive

Human evaluation comes in several flavors: absolute rating (score this response 1–5 on helpfulness), pairwise comparison (which of these two responses is better?), and task-specific evaluation (did the model correctly extract all entities from this document?). Pairwise comparison is generally more reliable than absolute rating because humans are better at comparing than scoring — this is why Chatbot Arena uses pairwise voting.
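To make the pairwise approach concrete, here is a minimal sketch of how raw pairwise votes can be aggregated into a ranking using Elo-style updates. It is similar in spirit to how Chatbot Arena turns votes into a leaderboard (the Arena itself fits a Bradley-Terry model; the K-factor, initial rating, and `elo_ratings` helper here are illustrative assumptions, not Arena's actual code):

```python
from collections import defaultdict

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_ratings(votes, k: float = 32.0, initial: float = 1000.0):
    """Turn pairwise votes into ratings.

    votes: iterable of (model_a, model_b, winner) tuples,
           where winner is "a", "b", or "tie".
    """
    ratings = defaultdict(lambda: initial)
    for a, b, winner in votes:
        score_a = {"a": 1.0, "b": 0.0, "tie": 0.5}[winner]
        e_a = expected_score(ratings[a], ratings[b])
        # Winner gains, loser loses; a tie nudges both toward parity.
        ratings[a] += k * (score_a - e_a)
        ratings[b] += k * ((1.0 - score_a) - (1.0 - e_a))
    return dict(ratings)

votes = [
    ("model-x", "model-y", "a"),
    ("model-y", "model-z", "tie"),
    ("model-x", "model-z", "a"),
]
print(sorted(elo_ratings(votes).items(), key=lambda kv: -kv[1]))
```

Note that annotators only ever answer the easy question ("which is better?"); the harder question ("how good is each?") is left to the aggregation.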

The Cost Problem

Human evaluation is expensive: skilled annotators, clear guidelines, quality control, and statistical significance require time and money. Evaluating a model across diverse tasks might need thousands of human judgments. This is why automated metrics exist — they're free and instant. The practical approach is to use automated metrics for rapid iteration during development and human evaluation for milestone decisions (release, A/B testing, safety audits).
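To see why statistical significance drives the judgment count (and therefore the cost), here is a small sketch of an exact two-sided sign test on pairwise preferences, with ties discarded before counting; the vote counts are made up for illustration:

```python
from math import comb

def sign_test_p_value(wins: int, losses: int) -> float:
    """Two-sided exact binomial test of H0: P(win) = 0.5.

    Ties are excluded before calling; n = wins + losses.
    """
    n = wins + losses
    k = max(wins, losses)
    # P(X >= k) for X ~ Binomial(n, 0.5), doubled for two sides.
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# The same 60% win rate is noise at small n, evidence at large n:
print(sign_test_p_value(wins=12, losses=8))    # ~0.50, not significant
print(sign_test_p_value(wins=120, losses=80))  # ~0.006, significant
```

A 60% preference over 20 comparisons is indistinguishable from chance; the same rate over 200 comparisons is strong evidence, which is why meaningful human evaluations require hundreds or thousands of judgments.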

LLM-as-Judge

A middle ground: use a strong LLM to evaluate a weaker model's outputs. This is cheaper than human evaluation and often correlates well with human judgments. But it has known biases: LLM judges tend to prefer longer responses, more formatted responses, and responses that match their own style. Using multiple judge models and calibrating against human ratings helps, but LLM-as-judge should complement, not replace, human evaluation for important decisions.
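As an illustration of one common debiasing step, the sketch below runs a pairwise judge twice with the answer positions swapped and only counts verdicts that survive the swap. Here `call_llm` is a hypothetical placeholder for whichever judge-model API you use, and the prompt wording is an assumption, not a standard:

```python
JUDGE_PROMPT = """You are an impartial judge. Compare two responses to the
same question and answer with exactly one letter: A, B, or T (tie).
Judge on helpfulness and accuracy, not on length or formatting.

Question: {question}
Response A: {answer_a}
Response B: {answer_b}
Verdict:"""

def call_llm(prompt: str) -> str:
    """Placeholder: swap in your judge model's API call here."""
    raise NotImplementedError

def judge_pair(question: str, answer_1: str, answer_2: str) -> str:
    """Pairwise judgment with position swap to reduce position bias.

    Returns "1", "2", or "tie". A win counts only if the same answer
    wins in both orderings; disagreement between orderings is a tie.
    """
    first = call_llm(JUDGE_PROMPT.format(
        question=question, answer_a=answer_1, answer_b=answer_2)).strip()
    second = call_llm(JUDGE_PROMPT.format(
        question=question, answer_a=answer_2, answer_b=answer_1)).strip()
    if first == "A" and second == "B":
        return "1"
    if first == "B" and second == "A":
        return "2"
    return "tie"
```

The position swap targets one documented judge bias (preferring whichever answer appears first); length and style biases need separate mitigations, such as the explicit instruction in the prompt and calibration against a held-out set of human ratings.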
