Reinforcement learning is built on a deceptively simple loop: an agent observes the current state of an environment, takes an action, receives a reward (or penalty), and updates its strategy accordingly. Repeat this millions or billions of times, and the agent discovers behaviors that maximize cumulative reward. The mathematical framework — Markov Decision Processes, Bellman equations, policy gradients — has existed since the 1950s, but RL remained a niche academic pursuit until deep learning gave it the ability to handle complex, high-dimensional environments. DeepMind's Atari-playing DQN agent was the first mainstream demonstration: a neural network that learned to play video games from raw pixel input with no game-specific programming. The original 2013 paper covered seven Atari titles; the 2015 Nature follow-up scaled to 49 games, matching or exceeding human performance on many of them.
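The observe-act-reward-update loop can be made concrete with tabular Q-learning, one of the simplest RL algorithms. Everything below is an illustrative sketch: the five-state corridor environment, the rewards, and the hyperparameters are made-up assumptions, not drawn from any real benchmark.

```python
import random

# A minimal RL loop: tabular Q-learning on a hypothetical 5-state corridor.
# The agent starts at state 0 and earns +1 only upon reaching state 4.
N_STATES = 5          # states 0..4; state 4 is terminal
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Observe the state, choose an action (epsilon-greedy exploration).
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        # Environment transition and reward signal.
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Update the strategy: Bellman-style temporal-difference update.
        best_next = max(Q[(s_next, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

# After training, moving right should score higher than moving left everywhere.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # -> [1, 1, 1, 1]
```

Even this toy shows the core dynamic: reward exists only at the goal, yet repeated updates propagate its value backward through the state space until every state "knows" which way to go.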
RL algorithms fall into two broad families. Value-based methods (like DQN and its descendants) learn to estimate how valuable each state or state-action pair is, then choose actions that lead to the highest-value states. Policy-based methods (like REINFORCE and PPO) directly learn a mapping from states to actions without explicitly estimating values. In practice, most modern RL systems use actor-critic methods that combine both: one network (the actor) decides what to do, and another (the critic) evaluates how good that decision was. Proximal Policy Optimization (PPO), developed by OpenAI in 2017, has become the workhorse algorithm for many applications because it's relatively stable to train and less sensitive to hyperparameter choices than earlier policy-gradient methods. Group Relative Policy Optimization (GRPO), used by DeepSeek for training their R1 reasoning model, eliminates the need for a separate critic network entirely, instead comparing multiple sampled outputs against each other to determine which ones deserve reinforcement.
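GRPO's critic-free idea can be sketched in a few lines: score a group of sampled outputs for the same prompt, then normalize each reward against the group's own statistics. This is a toy illustration; the `group_relative_advantages` helper and the reward values are hypothetical stand-ins for a real reward model's scores, and a full implementation would feed these advantages into a clipped policy-gradient update.

```python
import statistics

def group_relative_advantages(rewards):
    """Normalize each reward against the group's mean and standard deviation."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against division by zero
    return [(r - mean) / std for r in rewards]

# Four sampled completions for one prompt, scored by a (hypothetical) reward model.
rewards = [0.2, 0.9, 0.4, 0.5]
advantages = group_relative_advantages(rewards)

# Outputs scoring above the group mean get a positive advantage (reinforced);
# those below get a negative advantage (discouraged). No critic network needed.
print([round(a, 2) for a in advantages])
```

The group itself serves as the baseline that a critic would otherwise provide: an output is reinforced not for being good in absolute terms, but for being better than its siblings.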
AlphaGo's 2016 victory over world champion Lee Sedol in the game of Go was RL's watershed moment. Go has more possible board positions than atoms in the universe, making brute-force search impossible — the system had to develop genuine intuition about which moves were promising. AlphaGo combined supervised learning (training on human expert games) with RL (playing millions of games against itself to discover strategies no human had ever used). Its successor, AlphaZero, went further: it learned chess, Go, and shogi entirely from self-play, with no human game data at all, and surpassed all previous AI systems in each game within hours. This demonstrated that RL could discover superhuman strategies in domains where the rules are known and the reward signal is clear. The challenge has always been extending this success to messier, real-world domains where the reward is ambiguous and the environment is partially observable.
The most commercially important application of RL today is RLHF — Reinforcement Learning from Human Feedback — which is how language models learn to be helpful, harmless, and honest. The process works in stages: first, train a base language model on internet text (pre-training). Then, have human evaluators rank different model responses to the same prompt by quality. Use those rankings to train a reward model that predicts human preferences. Finally, use RL (typically PPO or a variant) to fine-tune the language model to maximize the reward model's score. This is what transforms a raw language model that might produce toxic, unhelpful, or dangerous outputs into a polished assistant. Anthropic's Constitutional AI extends this idea by having the model evaluate its own outputs against a set of principles, reducing the need for human labelers. Direct Preference Optimization (DPO) simplifies the pipeline further by eliminating the separate reward model entirely, directly optimizing the language model on preference data. Nearly every major chatbot — ChatGPT, Claude, Gemini, Command R — relies on some variant of this RL-based alignment process.
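The reward-modeling stage can be sketched with the commonly used Bradley-Terry formulation: given a human judgment that one response beats another, penalize the reward model when it scores the pair the wrong way round. The scalar scores and the `preference_loss` helper below are illustrative placeholders, not any lab's actual implementation.

```python
import math

def preference_loss(score_chosen, score_rejected):
    """Negative log-likelihood that the chosen response wins, under the
    Bradley-Terry model: P(chosen beats rejected) = sigmoid(score difference)."""
    diff = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

# The loss shrinks as the reward model separates the pair correctly...
print(round(preference_loss(2.0, 0.0), 3))  # small loss: chosen scored higher
# ...and grows when it ranks the rejected response higher.
print(round(preference_loss(0.0, 2.0), 3))  # large loss: wrong ordering
```

Minimizing this loss over many ranked pairs is what turns raw human rankings into a scalar reward signal that PPO (or a variant) can then maximize; DPO skips the intermediate model by folding a related preference objective directly into the language model's training.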
RL's next chapter is playing out in two domains. In AI agents, RL trains models to use tools, write and execute code, browse the web, and accomplish multi-step goals. OpenAI trained its o-series reasoning models with RL on coding and math tasks; the models learned to plan, backtrack, and verify — all emergent behaviors driven by reward signals for correct answers. In robotics, RL is finally delivering on decades of promise: systems like Google DeepMind's RT-2 and Figure's humanoid robots combine imitation learning from human demonstrations with RL to manipulate objects, navigate environments, and adapt to novel situations. The biggest open problems remain sample efficiency (RL typically needs millions of trials to learn behaviors a human picks up in minutes), reward specification (defining what "good" means in complex real-world tasks without accidentally incentivizing unintended shortcuts), and sim-to-real transfer (policies learned in simulation often break when deployed on physical hardware, where friction, latency, and sensor noise create a reality gap).