The key principle: augmentations must preserve the label. Flipping a cat image horizontally still shows a cat (valid augmentation). Flipping a "turn left" sign makes it a "turn right" sign (invalid augmentation). Choosing appropriate augmentations requires understanding what invariances matter for your task.
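This principle can be made concrete in a few lines. The sketch below, with made-up label names and an invariance set chosen purely for illustration, applies a horizontal flip only when the label survives the flip:

```python
import numpy as np

# Illustrative sketch: apply a flip only when the label is flip-invariant.
# The label names and invariance sets are hypothetical examples.
FLIP_INVARIANT = {"cat", "dog"}               # flipping preserves these labels
FLIP_SENSITIVE = {"turn_left", "turn_right"}  # flipping changes the meaning

def safe_hflip(image: np.ndarray, label: str) -> np.ndarray:
    """Horizontally flip `image` only if the label survives the flip."""
    if label in FLIP_INVARIANT:
        return image[:, ::-1]  # reverse the width axis (H, W, C layout)
    return image               # leave flip-sensitive examples untouched

img = np.arange(12).reshape(2, 3, 2)  # tiny 2x3 "image" with 2 channels
flipped = safe_hflip(img, "cat")        # flipped copy
unchanged = safe_hflip(img, "turn_left")  # returned as-is
```

In a real pipeline the invariance set would be a property of the task, not a hard-coded dictionary, but the decision it encodes is the same one you make when choosing any augmentation.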
AutoAugment and its successors (RandAugment, TrivialAugment) learn or randomize augmentation policies instead of hand-designing them. Cutout masks random patches within an image; CutMix replaces a patch with a crop from another image and mixes the labels in proportion to the patch area. MixUp interpolates between pairs of examples, creating synthetic training points that smooth decision boundaries. These techniques are now standard in vision training pipelines.
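MixUp in particular is only a few lines of arithmetic. A minimal sketch, assuming one-hot labels and the usual Beta-distributed mixing coefficient (the `alpha=0.2` default here is a common choice, not a fixed rule):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Blend two examples and their one-hot labels with a Beta-drawn weight."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)       # mixing coefficient in [0, 1]
    x = lam * x1 + (1 - lam) * x2      # interpolated input
    y = lam * y1 + (1 - lam) * y2      # interpolated (soft) label
    return x, y

x1, y1 = np.ones((4, 4)), np.array([1.0, 0.0])   # toy "cat" example
x2, y2 = np.zeros((4, 4)), np.array([0.0, 1.0])  # toy "dog" example
x, y = mixup(x1, y1, x2, y2)  # soft label sums to 1
```

The resulting soft label always sums to 1, and the blended input sits on the line segment between the two originals, which is exactly the boundary-smoothing effect described above.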
With generative models, augmentation goes beyond geometric transforms. You can use LLMs to paraphrase text training data, use diffusion models to generate variant images, or use models to create entirely new training examples (synthetic data). The line between "augmentation" (modifying existing examples) and "synthetic data" (generating new examples) is blurring, and both are becoming essential parts of modern training pipelines.
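The shape of a paraphrase-augmentation pipeline can be sketched without a real model. In the hypothetical example below, `paraphrase` stands in for an LLM call (a toy synonym swap keeps it runnable); the important part is the surrounding logic, which keeps the original label and discards paraphrases that didn't change anything:

```python
# Sketch of paraphrase-based text augmentation. `paraphrase` is a stand-in
# for an LLM call; the synonym table and examples are purely illustrative.
SYNONYMS = {"movie": "film", "great": "excellent"}  # toy lookup table

def paraphrase(text: str) -> str:
    # Stand-in for a model call: swap known words for synonyms.
    return " ".join(SYNONYMS.get(w, w) for w in text.split())

def augment_dataset(examples):
    """Return the originals plus label-preserving paraphrases."""
    out = []
    for text, label in examples:
        out.append((text, label))
        variant = paraphrase(text)
        if variant != text:             # keep only genuine variants
            out.append((variant, label))  # paraphrasing preserves the label
    return out

data = [("this movie is great", "positive"), ("dull plot", "negative")]
augmented = augment_dataset(data)  # originals plus one paraphrase
```

With a real LLM the same wrapper would also need a quality filter (deduplication, a check that the paraphrase still matches the label), which is where the augmentation/synthetic-data distinction starts to blur in practice.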