Fundamentals

Perplexity (Metric)

PPL
A measure of how well a language model predicts text. Technically, it is the exponential of the average cross-entropy loss. Intuitively, it represents how many tokens the model is "choosing among" at each step. A perplexity of 10 means the model is as uncertain as if it were choosing randomly among 10 equally likely options. Lower perplexity means better predictions.
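The "choosing among k options" intuition can be checked with a tiny sketch (toy numbers, no real model involved): a model that is uniformly uncertain over k options at every step has perplexity exactly k.

```python
import math

def uniform_perplexity(k: int, n_tokens: int = 100) -> float:
    # Every actual token gets probability 1/k, so each log-prob is log(1/k).
    log_probs = [math.log(1.0 / k)] * n_tokens
    avg_nll = -sum(log_probs) / len(log_probs)  # average negative log-likelihood
    return math.exp(avg_nll)                    # perplexity = exp(avg NLL)

# A model choosing randomly among 10 equally likely options:
print(round(uniform_perplexity(10), 6))  # prints 10.0
```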

Why it matters

Perplexity is the most fundamental metric for comparing the raw text-modeling ability of language models. It is computed on held-out text that the model never saw during training. When researchers say "we achieved lower perplexity on WikiText-103," they mean their model is better at predicting natural text. But perplexity alone does not tell you whether a model is useful, safe, or good at following instructions; for that, there are benchmarks and human evaluation.

Deep Dive

The formula: PPL = exp(−(1/N) ∑ log P(token_i | context_i)), where N is the number of tokens and P is the model's predicted probability for each actual token. If the model assigns high probability to every correct token, the sum of log probabilities is close to zero, and PPL approaches 1 (perfect). If the model is surprised by many tokens, the sum is a large negative number, and PPL is high.
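The formula above translates directly into code. A minimal sketch with made-up per-token log-probabilities (not outputs from any real model) shows both regimes: a confident model's PPL approaches 1, a surprised model's PPL is high.

```python
import math

def perplexity(log_probs: list[float]) -> float:
    """PPL = exp(-(1/N) * sum of log P(token_i | context_i))."""
    n = len(log_probs)
    return math.exp(-sum(log_probs) / n)

# Confident model: assigns probability 0.95 to every actual token.
confident = [math.log(0.95)] * 50
# Surprised model: assigns probability 0.05 to every actual token.
surprised = [math.log(0.05)] * 50

print(round(perplexity(confident), 3))  # ~1.053, close to the perfect score of 1
print(round(perplexity(surprised), 3))  # 20.0, the model is choosing among ~20 options
```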

Comparing Perplexities

You can only meaningfully compare perplexities between models that use the same tokenizer, or that are evaluated on the same text. A model with a larger vocabulary might have lower perplexity simply because it has more fine-grained tokens to assign probability to. Evaluation datasets matter too — perplexity on Wikipedia (clean, well-structured text) will be much lower than perplexity on Reddit (noisy, informal). Always check what tokenizer and evaluation set were used.
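One common workaround for the tokenizer mismatch is to renormalize: per-token cross-entropy in bits is log2(PPL), and dividing the total bits by the character count gives a tokenizer-independent bits-per-character figure. A sketch with hypothetical token and character counts (all numbers invented for illustration):

```python
import math

def bits_per_char(ppl: float, n_tokens: int, n_chars: int) -> float:
    # log2(ppl) is the cross-entropy per token in bits; multiply by the
    # number of tokens to get total bits, then normalize by characters.
    return n_tokens * math.log2(ppl) / n_chars

# Hypothetical: two models score the same 1000-character text but tokenize it
# differently. The coarse tokenizer emits fewer, larger tokens.
coarse = bits_per_char(ppl=20.0, n_tokens=250, n_chars=1000)  # big vocab
fine = bits_per_char(ppl=8.0, n_tokens=400, n_chars=1000)     # small vocab

print(round(coarse, 3), round(fine, 3))  # the coarse model wins per character
```

Note the reversal: the coarse-tokenizer model has the *higher* perplexity (20 vs. 8) but the *lower* bits-per-character, which is why raw PPL comparisons across tokenizers mislead.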

The Gap Between PPL and Usefulness

A model can have excellent perplexity but be terrible as an assistant. Pre-trained base models (before RLHF/DPO) typically have lower perplexity than their aligned counterparts, because alignment training optimizes for helpfulness rather than raw prediction accuracy. The aligned model might assign lower probability to the statistically most likely next token if that token would produce an unhelpful or unsafe response. This is a feature, not a bug — but it means perplexity is a measure of text modeling, not utility.
