
CLIP

Contrastive Language-Image Pre-training
An OpenAI model (2021) that learns to connect images and text by training on 400 million image-caption pairs. CLIP encodes images and text into the same embedding space, where matching image-text pairs land close together and non-matching pairs end up far apart. It is the bridge between language and vision in most modern multimodal AI systems.

Why It Matters

CLIP is the backbone of text-to-image generation (Stable Diffusion, DALL-E), image search, zero-shot image classification, and multimodal understanding. When you type a prompt and get an image back, CLIP (or a descendant of it) is what connects your words to visual concepts. It proved that powerful visual representations can be learned from natural language supervision alone, without labeled image datasets.

Deep Dive

CLIP trains two encoders simultaneously: a text encoder (Transformer) and an image encoder (ViT or ResNet). During training, a batch of N image-caption pairs produces N text embeddings and N image embeddings. The training objective maximizes cosine similarity for the N correct pairs while minimizing it for the N²−N incorrect pairs. This contrastive objective teaches both encoders to produce aligned representations.
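The objective can be written as a symmetric cross-entropy over a similarity matrix. Below is a minimal PyTorch sketch of that idea, assuming the two encoders have already produced a batch of embeddings; the function name is illustrative and the temperature is fixed here for simplicity (in CLIP it is a learned parameter).

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of N matched image-text pairs.

    image_emb, text_emb: (N, D) tensors from the image and text encoders.
    """
    # Normalize so the dot product equals cosine similarity
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # (N, N) similarity matrix: entry [i, j] compares image i with caption j
    logits = image_emb @ text_emb.t() / temperature

    # The correct caption for image i sits on the diagonal (column i)
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions: image->text and text->image
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2
```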

Zero-Shot Classification

CLIP can classify images into categories it was never explicitly trained on. To classify an image as "cat" or "dog," encode the text "a photo of a cat" and "a photo of a dog," encode the image, and pick the text with higher cosine similarity to the image. This zero-shot capability was revolutionary: a single model could handle any classification task by changing the text labels, without any task-specific training data.
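One way to try this is with the Hugging Face transformers implementation of CLIP; the checkpoint name, image path, and candidate labels below are just examples.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Example checkpoint; other CLIP checkpoints on the Hub work the same way
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("pet.jpg")  # hypothetical input image
labels = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them into label probabilities
probs = outputs.logits_per_image.softmax(dim=-1)
print({label: float(p) for label, p in zip(labels, probs[0])})
```

Changing the task is just a matter of changing the strings in `labels`; no retraining or task-specific data is required.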

CLIP in Diffusion Models

In text-to-image models, CLIP's text encoder converts your prompt into embeddings that guide image generation via cross-attention. The quality of CLIP's text understanding directly affects how well the image matches your prompt. Newer models use stronger text encoders (T5, which understands compositional language better) alongside or instead of CLIP, improving prompt following for complex descriptions. But CLIP's image encoder remains widely used for image understanding tasks.
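As a rough sketch of how prompt embeddings guide generation, the block below shows image latents attending to the sequence of text-encoder outputs via cross-attention. This is an illustrative module, not any specific diffusion model's code; the class name, dimensions, and residual wiring are assumptions.

```python
import torch
import torch.nn as nn

class TextCrossAttention(nn.Module):
    """Illustrative cross-attention block: image latents attend to prompt embeddings."""

    def __init__(self, latent_dim, text_dim, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            embed_dim=latent_dim, num_heads=n_heads,
            kdim=text_dim, vdim=text_dim, batch_first=True)

    def forward(self, latents, text_emb):
        # latents:  (B, num_patches, latent_dim) noisy image features being denoised
        # text_emb: (B, num_tokens, text_dim)    text encoder outputs for the prompt
        out, _ = self.attn(query=latents, key=text_emb, value=text_emb)
        return latents + out  # residual connection
```

The queries come from the image side and the keys/values from the text side, which is why a stronger text encoder (e.g. T5) can improve prompt following without changing the image pathway.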
