
Diffusion Transformer

DiT
An architecture that replaces the U-Net backbone traditionally used in diffusion models with a Transformer. DiT applies the attention mechanism to image generation, enabling the same scaling behavior that made LLMs so powerful. Sora, Flux, Stable Diffusion 3, and most state-of-the-art image and video generators use DiT or one of its variants.

Why it matters

DiT unified language and image generation under a single architectural paradigm: the Transformer. This means that the scaling laws, training techniques, and optimization strategies developed for LLMs largely transfer to image and video generation. That is why image quality has improved so rapidly: the field is riding the same scaling curve as language.

Deep Dive

The original DiT paper (Peebles & Xie, 2023) showed that simply replacing the U-Net with a standard Transformer and scaling it up produced better image quality. The Transformer processes image patches (similar to Vision Transformers) with added conditioning from the diffusion timestep and class labels. The key finding: DiT follows clear scaling laws — larger models and more compute produce predictably better images, just like with LLMs.
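A minimal sketch of that block structure in PyTorch may make it concrete. The class and attribute names below (DiTBlock, adaLN) are illustrative, not the paper's exact implementation; the point is that the timestep and class-label embedding is turned into shift, scale, and gate signals that modulate every Transformer block (the paper's adaLN-Zero conditioning), while the tokens themselves are flattened image patches.

import torch
import torch.nn as nn

class DiTBlock(nn.Module):
    # Illustrative sketch of one DiT block with adaLN-style conditioning;
    # not the reference implementation.
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        # The conditioning vector (timestep + class embedding) is mapped to six
        # modulation signals: shift, scale, and gate for attention and for the MLP.
        self.adaLN = nn.Sequential(nn.SiLU(), nn.Linear(dim, 6 * dim))

    def forward(self, x, cond):
        # x:    (batch, num_patches, dim) image-patch tokens
        # cond: (batch, dim) embedding of the diffusion timestep (+ class label)
        shift_a, scale_a, gate_a, shift_m, scale_m, gate_m = (
            self.adaLN(cond).chunk(6, dim=-1)
        )
        h = self.norm1(x) * (1 + scale_a.unsqueeze(1)) + shift_a.unsqueeze(1)
        x = x + gate_a.unsqueeze(1) * self.attn(h, h, h)[0]
        h = self.norm2(x) * (1 + scale_m.unsqueeze(1)) + shift_m.unsqueeze(1)
        x = x + gate_m.unsqueeze(1) * self.mlp(h)
        return x

The "Zero" in adaLN-Zero refers to initializing the gate projections to zero, so each block starts out as the identity function, which the paper found stabilizes training at scale.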

From U-Net to Transformer

U-Nets process images at multiple resolutions, downsampling then upsampling with skip connections. This inductive bias was useful when compute was limited, but it introduces architectural complexity and doesn't scale as cleanly. Transformers, with their uniform architecture, are simpler to scale and benefit more from additional compute and data. The trade-off: Transformers are more memory-hungry due to the quadratic attention over all image patches.
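A back-of-the-envelope calculation shows where that quadratic cost comes from; the latent resolution and patch size below are illustrative assumptions, not tied to any particular model.

# Rough illustration of why full attention over image patches is memory-hungry.
latent_h = latent_w = 64            # e.g. a 512x512 image after an 8x VAE downsample
patch = 2                           # DiT-style 2x2 patchification
tokens = (latent_h // patch) * (latent_w // patch)
print(tokens)                       # 1024 patch tokens
print(tokens ** 2)                  # ~1M attention scores per head, per layer
# Doubling the image side length quadruples the token count and grows the
# attention matrix 16x: quadratic scaling in sequence length.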

MM-DiT: Multi-Modal DiT

Stable Diffusion 3 and Flux use MM-DiT (Multi-Modal DiT), which processes text and image tokens in two parallel streams with separate weights. At every Transformer block the two token sequences are concatenated for a single joint attention operation, so information flows in both directions between prompt and image. This conditions generation on the prompt more effectively than the class-label conditioning of the original DiT. The text tokens come from frozen text encoders (such as T5 or CLIP), while the image stream carries the noised latent patches being denoised.
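A minimal sketch of one such block, assuming PyTorch, may help; the name MMDiTBlock, the single-head attention, and the omission of normalization, MLPs, and timestep modulation are all simplifications rather than the actual SD3/Flux code. What it shows is the core idea: separate projections per modality feeding one joint attention over the concatenated sequence.

import torch
import torch.nn as nn

class MMDiTBlock(nn.Module):
    # Illustrative sketch of joint text-image attention with per-modality
    # weights; not the actual SD3/Flux implementation.
    def __init__(self, dim: int):
        super().__init__()
        # Each modality gets its own projections...
        self.txt_qkv = nn.Linear(dim, 3 * dim)
        self.img_qkv = nn.Linear(dim, 3 * dim)
        self.txt_out = nn.Linear(dim, dim)
        self.img_out = nn.Linear(dim, dim)

    def forward(self, txt, img):
        # txt: (batch, n_txt, dim) tokens from the frozen text encoder
        # img: (batch, n_img, dim) noised latent-patch tokens
        tq, tk, tv = self.txt_qkv(txt).chunk(3, dim=-1)
        iq, ik, iv = self.img_qkv(img).chunk(3, dim=-1)
        # ...but attention runs once over the concatenated sequence, so every
        # text token and every image token can attend to every other token.
        q = torch.cat([tq, iq], dim=1)
        k = torch.cat([tk, ik], dim=1)
        v = torch.cat([tv, iv], dim=1)
        scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
        mixed = torch.softmax(scores, dim=-1) @ v
        n_txt = txt.shape[1]
        txt = txt + self.txt_out(mixed[:, :n_txt])
        img = img + self.img_out(mixed[:, n_txt:])
        return txt, img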

Related concepts

Diffusion Model Distillation