ControlNet (Zhang et al., 2023) works by creating a trainable copy of the diffusion model's encoder and connecting it to the original model via zero-initialized convolution layers. The control signal (edge map, pose, depth) is processed by this copy, and the features are added to the main model's corresponding layers. The zero initialization means the control starts with no effect and gradually learns to guide generation during training, preserving the original model's quality.
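The zero-initialization trick can be sketched in a few lines of NumPy. This is a toy illustration, not ControlNet itself: the feature maps are random stand-ins for U-Net activations, and the "zero convolution" is modeled as a 1x1 convolution (a per-pixel channel mix) with all-zero weights and bias. The point it demonstrates is that at initialization the control branch contributes exactly nothing, so the combined features equal the base model's features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feature maps, shaped (channels, height, width).
base_features = rng.standard_normal((4, 8, 8))     # from the frozen encoder
control_features = rng.standard_normal((4, 8, 8))  # from the trainable copy

def conv_1x1(x, weight, bias):
    """A 1x1 convolution is a per-pixel channel mix: out = W @ channels + b."""
    c, h, w = x.shape
    out = weight @ x.reshape(c, h * w) + bias[:, None]
    return out.reshape(weight.shape[0], h, w)

# Zero-initialized weight and bias, as in ControlNet's "zero convolution".
W = np.zeros((4, 4))
b = np.zeros(4)

# At initialization the control branch adds nothing, so generation is
# unchanged; training gradually moves W and b away from zero.
combined = base_features + conv_1x1(control_features, W, b)
assert np.allclose(combined, base_features)
```

Once training updates `W` and `b` away from zero, the same addition injects control information into the frozen model's layers, which is why the base model's quality is preserved at the start of training.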
Common control inputs: Canny edges (outline structure), OpenPose (human body pose), depth maps (3D structure), segmentation maps (which object occupies each region), normal maps (surface orientation), and scribbles (rough sketches). Each control type requires a separately trained ControlNet. Multiple controls can be combined: a pose skeleton plus an edge map gives you both body position and structural details.
Beyond spatial control, techniques like IP-Adapter provide style control: supply a reference image and generate new images in its style. T2I-Adapter is a lighter alternative to ControlNet that achieves similar control with far fewer trainable parameters. The trend is toward increasingly precise, composable control — specifying exactly what you want through a combination of text, spatial guides, style references, and iterative refinement.