The task: given an image, predict 2D coordinates (x, y) for each keypoint (17 for the body in the COCO convention: nose, eyes, ears, shoulders, elbows, wrists, hips, knees, ankles). Top-down approaches first detect people (bounding boxes), then estimate a pose within each box. Bottom-up approaches detect all keypoints in the image first, then group them into individuals. Top-down is typically more accurate when only a few people are present; bottom-up scales better to crowds, since its cost does not grow with the number of people detected.
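To make the 17-keypoint format concrete, here is a minimal sketch of the COCO body keypoint layout. The joint names and order follow the COCO convention, where annotations are stored as a flat `[x, y, v]` list per person and `v` is a visibility flag (0 = not labeled, 1 = labeled but occluded, 2 = visible); the helper function is illustrative, not a library API.

```python
# The 17 COCO body keypoints, in their standard annotation order.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def parse_keypoints(flat):
    """Turn a flat [x1, y1, v1, x2, y2, v2, ...] annotation into
    {joint_name: (x, y)}, keeping only labeled keypoints (v > 0)."""
    pose = {}
    for name, i in zip(COCO_KEYPOINTS, range(0, len(flat), 3)):
        x, y, v = flat[i], flat[i + 1], flat[i + 2]
        if v > 0:
            pose[name] = (x, y)
    return pose
```

A model's output head usually mirrors this layout: one (x, y) pair (or one heatmap) per named joint.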
2D pose gives (x, y) in image coordinates. 3D pose estimates (x, y, z) in real-world coordinates, adding depth (is the hand reaching toward or away from the camera?). 3D pose is essential for motion capture, VR/AR, and robotics. Because a single image does not contain depth, models recover it by leveraging learned priors about human body proportions and motion: MotionBERT lifts sequences of 2D keypoints to 3D, while 4DHumans reconstructs 3D body meshes directly from monocular images.
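A tiny pinhole-camera sketch shows why depth must come from priors rather than from the image alone: scaling a point's distance from the camera and its offset by the same factor leaves its 2D projection unchanged. The focal length `f` here is an arbitrary illustrative value.

```python
def project(x, y, z, f=1000.0):
    """Project a 3D camera-space point (x, y, z) to 2D image-plane
    coordinates using the pinhole model: u = f*x/z, v = f*y/z."""
    return (f * x / z, f * y / z)

# A small, near hand and a large, far hand land on the same pixel:
near = project(0.5, 0.2, 2.0)  # point 2 m from the camera
far = project(1.0, 0.4, 4.0)   # twice the offset, twice the distance
# near == far: depth (z) cannot be recovered from (x, y) alone.
```

This ambiguity is exactly what learned body-proportion priors resolve: a human skeleton has known limb-length ratios, which pins down a plausible scale and depth.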
Hand pose estimation tracks 21 keypoints per hand, enabling gesture recognition and sign language understanding. Face landmark detection tracks 468+ points for expression analysis, face filters, and emotion recognition. Animal pose estimation adapts the same techniques to quadrupeds, enabling wildlife research and veterinary applications. MediaPipe (Google) provides real-time solutions for body, hand, and face pose that run on mobile devices.
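Once the 21 hand keypoints are available, simple geometry on them already supports basic gesture recognition. The sketch below uses MediaPipe's 21-landmark hand layout (index 0 = wrist, fingertips at 4/8/12/16/20, PIP joints at 6/10/14/18); the rule itself, "a finger is extended if its tip lies above its PIP joint in image coordinates," is an illustrative simplification, not a MediaPipe API.

```python
# Landmark indices for four fingers, following MediaPipe's hand layout.
# (The thumb is omitted: its extension axis is sideways, not vertical.)
FINGER_TIPS = {"index": 8, "middle": 12, "ring": 16, "pinky": 20}
FINGER_PIPS = {"index": 6, "middle": 10, "ring": 14, "pinky": 18}

def extended_fingers(landmarks):
    """landmarks: list of 21 (x, y) tuples in image coordinates, where y
    grows downward. A finger counts as extended when its tip is above
    (smaller y than) its PIP joint."""
    return [
        name for name in FINGER_TIPS
        if landmarks[FINGER_TIPS[name]][1] < landmarks[FINGER_PIPS[name]][1]
    ]
```

Real gesture pipelines replace this heuristic with a small classifier over all 21 landmarks, but the input representation, a per-frame list of keypoints, is the same.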