The intuition: imagine every point in "noise space" connected to a point in "image space" by a straight line. Flow matching trains a neural network to predict the velocity (direction and speed) along these paths at any point in space and time. To generate an image, you start from a random noise point and follow the velocity field until you arrive at a clean image. The straighter the paths, the fewer integration steps you need, which is why "rectified flows" (which iteratively straighten the paths) are important.
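A useful consequence of the straight-line path is that the regression target is trivial to compute: if the path is x_t = (1 - t)·x0 + t·x1, its velocity is the constant x1 - x0. The sketch below (a minimal numpy illustration, not any particular library's API; `model` is a hypothetical placeholder network) shows how the training target is formed:

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_targets(x0, x1, t):
    """Straight-line probability path between a noise sample x0 and a data
    sample x1: x_t = (1 - t) * x0 + t * x1.

    The path's velocity dx_t/dt = x1 - x0 is constant along the whole line,
    so it serves directly as the regression target.
    """
    x_t = (1.0 - t) * x0 + t * x1
    v_target = x1 - x0
    return x_t, v_target

# One conceptual training step: sample noise, data, and a random time,
# then regress the predicted velocity onto v_target with an L2 loss.
x0 = rng.standard_normal(4)   # sample from "noise space"
x1 = np.ones(4)               # stand-in for a real data sample
t = rng.uniform()
x_t, v_target = flow_matching_targets(x0, x1, t)

def model(x, t):              # hypothetical network; a placeholder here
    return np.zeros_like(x)

loss = np.mean((model(x_t, t) - v_target) ** 2)
```

In a real system the placeholder `model` would be a neural network conditioned on t, and x0, x1 would be minibatches of noise and images; the target construction itself is exactly this simple.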
Traditional diffusion models define a fixed forward process (gradually adding Gaussian noise) and learn the reverse process (denoising). The resulting sampling trajectories curve through high-dimensional space, so reversing them accurately requires many small steps (typically 20–50). Flow matching learns more direct paths, often achieving equivalent quality in 4–10 steps. Some formulations (like consistency models) push this to a single step, though with some quality trade-off.
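The step-count argument can be made concrete with a plain Euler integrator: sampling just means following the learned velocity field from t = 0 (noise) to t = 1 (data). This is a minimal sketch, assuming a generic `velocity_fn(x, t)` callable standing in for the trained network:

```python
import numpy as np

def euler_sample(velocity_fn, x_noise, n_steps=4):
    """Integrate dx/dt = v(x, t) from t = 0 (noise) to t = 1 (data)
    with fixed-size Euler steps.

    If the flow is perfectly straight (rectified), the velocity is constant
    along the trajectory and even a single Euler step is exact; curved
    trajectories need more, smaller steps to stay accurate.
    """
    x = np.array(x_noise, dtype=float)
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        x = x + dt * velocity_fn(x, t)
    return x

# Toy check: for a straight path toward a fixed target, the true velocity
# field is the constant (target - start), so one step lands exactly.
target = np.full(4, 2.0)            # stand-in for a "clean image"
start = np.zeros(4)
v_fn = lambda x, t: target - start  # constant field of a straight path
out = euler_sample(v_fn, start, n_steps=1)
# out equals target exactly
```

Real samplers use higher-order or adaptive solvers rather than plain Euler, but the trade-off is the same: straighter learned paths shrink the integration error per step, which is what lets rectified flows get away with so few steps.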
Mathematically, diffusion models and flow matching are both instances of continuous-time generative models — they differ in the probability paths they define between noise and data. This unified perspective is helping researchers design better training objectives and architectures that combine insights from both. The practical implication: the distinction between "diffusion model" and "flow matching model" is becoming more about training methodology than fundamental architecture.