Fundamentals

Human Evaluation

Also known as: Human Eval, Manual Evaluation
Assessing the quality of AI output by having humans judge it directly. Humans evaluate fluency, accuracy, helpfulness, safety, and whether the output actually satisfies the request. Although it is expensive and slow, human evaluation remains the gold standard because automated metrics frequently miss what actually matters to users.

Why it matters

Every automated metric is a proxy for human judgment, and every proxy has blind spots. BLEU cannot detect factual errors. Perplexity cannot measure helpfulness. Even LLM-as-judge approaches inherit biases (preferring verbose responses, for example). When the stakes are high (shipping a product, comparing model versions, assessing safety), human evaluation is irreplaceable.

Deep Dive

Human evaluation comes in several flavors: absolute rating (score this response 1–5 on helpfulness), pairwise comparison (which of these two responses is better?), and task-specific evaluation (did the model correctly extract all entities from this document?). Pairwise comparison is generally more reliable than absolute rating because humans are better at comparing than scoring — this is why Chatbot Arena uses pairwise voting.
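As an illustration, pairwise votes can be aggregated into a per-model ranking with an Elo-style update, which is close in spirit to how Chatbot Arena scores models (its leaderboard uses a Bradley-Terry model with confidence intervals, so this is a simplified sketch; the model names and votes below are invented):

```python
from collections import defaultdict

def elo_ratings(battles, k=32, base=1000.0):
    """Turn pairwise human votes into Elo-style scores.

    battles: iterable of (model_a, model_b, winner), winner "a" or "b".
    """
    ratings = defaultdict(lambda: base)
    for a, b, winner in battles:
        # Expected score of a against b under the Elo logistic curve.
        expected_a = 1.0 / (1.0 + 10 ** ((ratings[b] - ratings[a]) / 400))
        score_a = 1.0 if winner == "a" else 0.0
        ratings[a] += k * (score_a - expected_a)
        ratings[b] += k * ((1.0 - score_a) - (1.0 - expected_a))
    return dict(ratings)

# Toy votes: model-x beats model-y and model-z; model-y beats model-z.
votes = [("model-x", "model-y", "a"),
         ("model-y", "model-z", "a"),
         ("model-x", "model-z", "a")]
print(elo_ratings(votes))
```

Pairwise schemes like this sidestep the calibration problem of absolute scales: two annotators may disagree on what "4 out of 5" means but still agree on which of two responses is better.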

The Cost Problem

Human evaluation is expensive: skilled annotators, clear guidelines, quality control, and statistical significance require time and money. Evaluating a model across diverse tasks might need thousands of human judgments. This is why automated metrics exist — they're free and instant. The practical approach is to use automated metrics for rapid iteration during development and human evaluation for milestone decisions (release, A/B testing, safety audits).
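To make "statistical significance" concrete: a quick check of whether a preference observed in pairwise judgments is distinguishable from chance is an exact binomial test against a 50% win rate (the counts below are made up for illustration):

```python
from scipy.stats import binomtest

# Hypothetical result: annotators preferred model B in 117 of 200
# pairwise judgments. Could a 50/50 coin produce a split this lopsided?
result = binomtest(k=117, n=200, p=0.5, alternative="two-sided")
print(f"win rate: {117 / 200:.1%}, p-value: {result.pvalue:.4f}")
```

A 58.5% win rate over 200 judgments yields a p-value of roughly 0.02, which is exactly why milestone decisions need hundreds of judgments rather than a handful: the same win rate over 20 judgments would prove nothing.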

LLM-as-Judge

A middle ground: use a strong LLM to evaluate a weaker model's outputs. This is cheaper than human evaluation and often correlates well with human judgments. But it has known biases: LLM judges tend to prefer longer responses, more formatted responses, and responses that match their own style. Using multiple judge models and calibrating against human ratings helps, but LLM-as-judge should complement, not replace, human evaluation for important decisions.
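One simple way to do that calibration is to measure rank agreement between the judge and human raters on a shared sample, for instance with Spearman correlation. The scores below are invented for the sketch:

```python
from scipy.stats import spearmanr

# Hypothetical scores for the same eight responses: human 1-5 ratings
# next to an LLM judge's scores. Real calibration needs far more data.
human_ratings = [5, 4, 4, 2, 3, 1, 5, 2]
judge_scores = [4.5, 4.0, 3.5, 2.5, 3.0, 2.0, 5.0, 1.5]

rho, pvalue = spearmanr(human_ratings, judge_scores)
print(f"Spearman rho: {rho:.2f} (p = {pvalue:.3f})")
```

Low correlation on such a holdout is a signal to rework the judge prompt, switch judge models, or fall back to human labels for that task.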
