A CNN's core operation is convolution: a small filter (say 3×3 pixels) slides across the image, computing a dot product at each position to detect a specific pattern. Early layers learn simple patterns (edges, color gradients). Deeper layers combine these into increasingly complex features (eyes, wheels, faces). Pooling layers downsample between convolution layers, reducing spatial dimensions while preserving important features.
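The sliding dot product above can be sketched directly. This is a minimal illustration, not a library implementation: a naive 2D convolution (technically cross-correlation, as deep-learning frameworks define it) applying a hand-crafted vertical-edge filter of the kind an early layer might learn. The function and variable names are assumptions for the example.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image`, taking a dot product at each
    position (cross-correlation, the deep-learning convention)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical-edge detector: responds where intensity changes
# left-to-right, similar to a filter an early CNN layer learns.
edge_filter = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]])

# 6x6 image: dark left half, bright right half -> one vertical edge.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

feature_map = conv2d(image, edge_filter)
print(feature_map)  # strong response only at positions straddling the edge
```

The output map is zero over the flat regions and peaks where the filter's window straddles the dark-to-bright boundary, which is exactly the "detect a specific pattern at each position" behavior described above.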
Two key properties make CNNs efficient: translation equivariance (shifting the input shifts the feature map the same way, so a single shared filter detects a cat wherever it appears; pooling then adds approximate translation invariance) and locality (nearby pixels are more related than distant ones, so each filter looks at a small neighborhood). Together these drastically reduce the number of parameters compared to fully connected networks, making CNNs tractable for high-resolution images.
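The parameter savings are easy to make concrete. A rough back-of-the-envelope comparison for a single layer on a 224×224 RGB input (the layer sizes here are illustrative assumptions, not from the text):

```python
# One layer on a 224x224 RGB image.
H, W, C = 224, 224, 3

# Fully connected: every output unit has a weight for every input
# value. Even a modest 1000-unit layer needs:
fc_params = (H * W * C) * 1000       # 150,528,000 weights

# Convolutional: 64 filters of size 3x3x3, shared across all
# spatial positions (weight sharing = translation equivariance).
conv_params = 64 * (3 * 3 * C + 1)   # weights + one bias per filter

print(fc_params)    # 150528000
print(conv_params)  # 1792
```

The convolutional layer uses roughly five orders of magnitude fewer parameters, because locality keeps each filter small and equivariance lets one filter be reused at every position.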
CNNs aren't limited to images. 1D convolutions process sequences (audio waveforms, time series). WaveNet (for speech synthesis) and some text classification models use 1D CNNs. In audio, spectrograms are treated as 2D images and processed with standard 2D CNNs. Even in the Transformer era, some hybrid architectures use convolutional layers for local feature extraction before feeding into attention layers.
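A 1D convolution over a sequence works the same way, just along one axis. Below is a minimal sketch (names are assumptions for the example) of a hand-crafted step-change detector on a time series, the kind of local pattern a 1D CNN filter might learn from data:

```python
import numpy as np

def conv1d(signal, kernel):
    """1D cross-correlation: slide `kernel` along the sequence."""
    k = len(kernel)
    return np.array([np.dot(signal[i:i+k], kernel)
                     for i in range(len(signal) - k + 1)])

# Difference-of-means filter over a 4-sample window: responds
# where the signal level jumps (a learned 1D filter might look
# similar for change-point detection).
kernel = np.array([-1.0, -1.0, 1.0, 1.0])

# Flat signal with a single upward step in the middle.
signal = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)

response = conv1d(signal, kernel)
print(response)  # peaks where the window is centered on the step
```

The response is zero over the flat stretches and largest where the window straddles the step, mirroring how the 2D case localizes edges.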