
CLIP

Contrastive Language-Image Pre-training
A model from OpenAI (2021) that learns to connect images and text by training on 400 million image-caption pairs. CLIP encodes images and text into the same embedding space, where matching image-text pairs are close together and non-matching pairs are far apart. It's the bridge between language and vision in most modern multimodal AI systems.

Why it matters

CLIP is the backbone of text-to-image generation (Stable Diffusion, DALL-E), image search, zero-shot image classification, and multimodal understanding. When you type a prompt and get an image, CLIP (or a descendant) is what connects your words to visual concepts. It proved that you can learn powerful visual representations from natural language supervision alone, without labeled image datasets.

Deep Dive

CLIP trains two encoders simultaneously: a text encoder (a Transformer) and an image encoder (a ViT or ResNet). During training, a batch of N image-caption pairs produces N text embeddings and N image embeddings. The training objective maximizes cosine similarity for the N correct pairs while minimizing it for the N²−N incorrect pairs; concretely, the temperature-scaled similarity matrix is treated as logits for a symmetric cross-entropy loss over images and texts. This contrastive objective teaches both encoders to produce aligned representations.
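
The objective is compact in code. Below is a minimal PyTorch sketch of the symmetric contrastive loss, following the pseudocode in the CLIP paper; the function name is illustrative, and the temperature is fixed here for simplicity, whereas CLIP actually learns it as a parameter.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # image_emb, text_emb: (N, d) batches from the two encoders.
    # L2-normalize so dot products are cosine similarities.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (N, N) similarity matrix: entry (i, j) compares image i with caption j.
    # Note: CLIP learns the temperature (as a logit scale); fixed here.
    logits = image_emb @ text_emb.t() / temperature

    # The correct pairings lie on the diagonal: image i matches caption i.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Symmetric cross-entropy: pick the right caption for each image,
    # and the right image for each caption.
    loss_i = F.cross_entropy(logits, targets)
    loss_t = F.cross_entropy(logits.t(), targets)
    return (loss_i + loss_t) / 2
```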

Zero-Shot Classification

CLIP can classify images into categories it was never explicitly trained on. To classify an image as "cat" or "dog," encode the text "a photo of a cat" and "a photo of a dog," encode the image, and pick the text with higher cosine similarity to the image. This zero-shot capability was revolutionary: a single model could handle any classification task by changing the text labels, without any task-specific training data.
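
For illustration, here is one way to run that zero-shot recipe with the Hugging Face transformers library and a public CLIP checkpoint; the image path is a placeholder.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a cat", "a photo of a dog"]
image = Image.open("pet.jpg")  # placeholder: any local image

inputs = processor(text=labels, images=image,
                   return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds scaled image-text similarities;
# softmax over the candidate labels gives a zero-shot prediction.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```

Changing the task is just a matter of changing the strings in `labels`; no retraining or task-specific data is involved.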

CLIP in Diffusion Models

In text-to-image models, CLIP's text encoder converts your prompt into embeddings that guide image generation via cross-attention. The quality of CLIP's text understanding directly affects how well the image matches your prompt. Newer models use stronger text encoders such as T5, which handles compositional language better, alongside or instead of CLIP, improving prompt following for complex descriptions. But CLIP's image encoder remains widely used for image understanding tasks.
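
As a sketch of the conditioning step, the snippet below extracts per-token prompt embeddings with Hugging Face transformers classes, using the CLIP text encoder that Stable Diffusion v1.x conditions on; a diffusion pipeline would feed these embeddings into the U-Net's cross-attention layers.

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

# Stable Diffusion v1.x conditions on this CLIP text encoder.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

prompt = "a watercolor painting of a lighthouse at dusk"
tokens = tokenizer(prompt, padding="max_length",
                   max_length=tokenizer.model_max_length,
                   return_tensors="pt")

with torch.no_grad():
    # last_hidden_state: one embedding per token position;
    # the diffusion U-Net attends to these via cross-attention.
    embeddings = text_encoder(tokens.input_ids).last_hidden_state

print(embeddings.shape)  # torch.Size([1, 77, 768])
```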
