Basics

Perplexity (Metric)

PPL
A measure of how well a language model predicts text. Technically, it is the exponential of the average cross-entropy loss. Intuitively, it is the number of tokens the model is effectively "choosing between" at each step: a perplexity of 10 means the model is as uncertain as if it were picking at random among 10 equally likely options. Lower perplexity means better prediction.
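As a quick sanity check on that intuition, here is a minimal sketch in pure Python (the 20-token sequence and the uniform probability of 0.1 are made up for illustration):

```python
import math

# Made-up example: the model assigns probability 0.1 to every correct token,
# i.e. it is always choosing among 10 equally likely options.
probs = [0.1] * 20                                        # 20 tokens
avg_ce = -sum(math.log(p) for p in probs) / len(probs)    # average cross-entropy
ppl = math.exp(avg_ce)                                    # exponentiate
print(f"{ppl:.2f}")  # 10.00 -- matches the "10 equally likely options" reading
```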

Why It Matters

Perplexity is the most basic metric for comparing the raw text-modeling ability of language models. It is computed on held-out text that the model never saw during training. When researchers say "we achieve lower perplexity on WikiText-103," they mean their model is better at predicting natural text. But perplexity alone cannot tell you whether a model is useful, safe, or good at following instructions; that is what benchmarks and human evaluation are for.

Deep Dive

The formula: PPL = exp(−(1/N) ∑ log P(token_i | context_i)), where N is the number of tokens and P is the model's predicted probability for each actual token. If the model assigns high probability to every correct token, the sum of log probabilities is close to zero, and PPL approaches 1 (perfect). If the model is surprised by many tokens, the sum is a large negative number, and PPL is high.
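In practice this is usually computed from a model's cross-entropy loss directly. A minimal sketch using Hugging Face transformers (GPT-2 and the example sentence are arbitrary choices; any causal LM works the same way): when labels are supplied, the model returns the mean cross-entropy over predicted tokens, and perplexity is its exponential.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # arbitrary small causal LM
tok = AutoTokenizer.from_pretrained("gpt2")
model.eval()

text = "The quick brown fox jumps over the lazy dog."
enc = tok(text, return_tensors="pt")

with torch.no_grad():
    # Passing input_ids as labels makes the model compute mean cross-entropy
    # over next-token predictions (the off-by-one shift is handled internally).
    out = model(**enc, labels=enc["input_ids"])

ppl = torch.exp(out.loss)  # PPL = exp(average cross-entropy)
print(f"Perplexity: {ppl.item():.2f}")
```

Note that this averages log-probabilities over one short sequence; reported numbers average over an entire held-out corpus such as WikiText-103.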

Comparing Perplexities

You can only meaningfully compare perplexities between models that use the same tokenizer and are evaluated on the same text. Perplexity is a per-token quantity, so it depends on tokenization granularity: a tokenizer that splits text into more, smaller pieces tends to report lower per-token perplexity simply because each short token is easier to predict. Evaluation datasets matter too: perplexity on Wikipedia (clean, well-structured text) will be much lower than perplexity on Reddit (noisy, informal). Always check what tokenizer and evaluation set were used; when tokenizers differ, normalize first, as in the sketch below.
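One common normalization is bits per byte: convert each model's per-token perplexity into total bits of information, then divide by the length of the evaluation text in bytes, which is the same for both models. A minimal sketch, with all numbers hypothetical:

```python
import math

# Hypothetical numbers: two models evaluated on the same 480-byte text.
# Model A's tokenizer splits it into 120 tokens, Model B's into 100.
text_bytes = 480

ppl_a, n_tokens_a = 18.0, 120   # per-token perplexities are NOT
ppl_b, n_tokens_b = 24.0, 100   # directly comparable across tokenizers

def bits_per_byte(ppl, n_tokens, n_bytes):
    bits_per_token = math.log2(ppl)       # log2(PPL) = average bits per token
    return n_tokens * bits_per_token / n_bytes

print(f"Model A: {bits_per_byte(ppl_a, n_tokens_a, text_bytes):.3f} bits/byte")  # ~1.042
print(f"Model B: {bits_per_byte(ppl_b, n_tokens_b, text_bytes):.3f} bits/byte")  # ~0.955
```

Despite its higher per-token perplexity, Model B encodes the text in fewer bits per byte, so it is the better text model in this made-up example.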

The Gap Between PPL and Usefulness

A model can have excellent perplexity but be terrible as an assistant. Pre-trained base models (before RLHF/DPO) typically have lower perplexity than their aligned counterparts, because alignment training optimizes for helpfulness rather than raw prediction accuracy. The aligned model might assign lower probability to the statistically most likely next token if that token would produce an unhelpful or unsafe response. This is a feature, not a bug — but it means perplexity is a measure of text modeling, not utility.
