
Reward Model

RM, Preference Model
A model trained to predict human preferences between AI responses. Given a prompt and two candidate responses, the reward model scores which one humans would prefer. In the RLHF pipeline, the reward model provides the signal that trains the language model to produce better responses: it is the learned proxy for human judgment.

Why it matters

The reward model is the key component that makes RLHF work. You cannot have a human rate every response during training (too slow, too expensive), so you train a model to approximate human preferences and use it as the training signal. The quality of the reward model directly determines the quality of alignment: a bad reward model produces a model that optimizes for the wrong things.

Deep Dive

Training a reward model: collect pairs of responses to the same prompt, have humans rank them (response A is better than response B), then train a model to predict these rankings. The reward model outputs a scalar score for any (prompt, response) pair. During RL training, the language model generates responses, the reward model scores them, and the language model is updated to produce higher-scoring responses.
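
As a concrete illustration, here is a minimal sketch of the standard pairwise (Bradley-Terry) training objective in PyTorch. The backbone interface, `hidden_size`, and all names are assumptions made for this example, not any specific library's API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """A (hypothetical) transformer backbone topped with a scalar head."""
    def __init__(self, backbone: nn.Module, hidden_size: int):
        super().__init__()
        self.backbone = backbone
        self.value_head = nn.Linear(hidden_size, 1)  # hidden state -> scalar

    def forward(self, input_ids, attention_mask):
        # Assumed backbone contract: returns hidden states of shape (B, T, H)
        hidden = self.backbone(input_ids, attention_mask)
        last = hidden[:, -1, :]                    # final-token representation
        return self.value_head(last).squeeze(-1)   # (B,) scalar rewards

def pairwise_loss(reward_chosen: torch.Tensor,
                  reward_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry objective: maximize P(chosen > rejected),
    # i.e. minimize -log sigmoid(r_chosen - r_rejected)
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()
```

The loss only depends on the score difference between the two responses, which is why reward-model scores are meaningful as rankings rather than on an absolute scale.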

Reward Hacking

A dangerous failure mode: the language model finds ways to get high reward scores without actually being helpful. If the reward model has learned to prefer longer responses (because humans often preferred more detailed answers), the language model might pad responses with unnecessary content. This is called "reward hacking" or "reward gaming." Mitigations include KL divergence penalties (preventing the model from drifting too far from the base model), ensembles of reward models, and regular recalibration against human judgments.
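
To make the KL mitigation concrete, here is an illustrative sketch of how a per-token penalty is commonly combined with the reward model's score during RL training. The shaping scheme and the `beta` coefficient are assumptions for this example:

```python
import torch

def shaped_rewards(rm_score: torch.Tensor,
                   policy_logprobs: torch.Tensor,
                   ref_logprobs: torch.Tensor,
                   beta: float = 0.1) -> torch.Tensor:
    """
    rm_score:        (B,)   scalar reward-model score per response
    policy_logprobs: (B, T) log-probs of the sampled tokens under the policy
    ref_logprobs:    (B, T) log-probs of the same tokens under the base model
    """
    # Per-token KL estimate between the policy and the base model
    kl = policy_logprobs - ref_logprobs      # (B, T)
    token_rewards = -beta * kl               # penalize drift from the base model
    token_rewards[:, -1] += rm_score         # reward-model score at the last token
    return token_rewards
```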

DPO Bypasses the RM

DPO (Direct Preference Optimization) eliminates the separate reward model entirely, optimizing the language model directly on preference pairs. This avoids reward hacking but loses the ability to score arbitrary responses. Some labs use both: a reward model for evaluation and ranking, plus DPO for training. The optimal approach depends on scale, data quality, and how much you need to evaluate responses outside of training.
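
A minimal sketch of the DPO objective on a batch of preference pairs, assuming PyTorch and log-probabilities already summed over the response tokens:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_lp: torch.Tensor,
             policy_rejected_lp: torch.Tensor,
             ref_chosen_lp: torch.Tensor,
             ref_rejected_lp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Each argument: (B,) summed log-probs of a response under the policy
    being trained or under the frozen reference (base) model."""
    # The implicit reward of a response is beta * (log pi - log pi_ref);
    # the loss pushes the chosen response's implicit reward above the rejected one's.
    logits = beta * ((policy_chosen_lp - ref_chosen_lp)
                     - (policy_rejected_lp - ref_rejected_lp))
    return -F.logsigmoid(logits).mean()
```

The frozen reference model plays the role the KL penalty plays in RLHF: it anchors the policy to its starting point instead of letting it drift arbitrarily far to satisfy the preferences.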
