
Diffusion Transformer

DiT is an architecture that replaces the traditional U-Net backbone of diffusion models with a Transformer. By applying the attention mechanism to image generation, DiT exhibits the same scaling behavior that makes LLMs so powerful. Sora, Flux, Stable Diffusion 3, and most state-of-the-art image and video generators use DiT or a variant of it.

Why It Matters

DiT unifies language and image generation under a single architectural paradigm: the Transformer. This means the scaling laws, training tricks, and optimization strategies developed for LLMs largely transfer to image and video generation. This is why image quality has improved so quickly: the field is riding the same scaling curve as language.

Deep Dive

The original DiT paper (Peebles & Xie, 2023) showed that simply replacing the U-Net with a standard Transformer and scaling it up produced better image quality. The Transformer processes image patches (similar to Vision Transformers) with added conditioning from the diffusion timestep and class labels. The key finding: DiT follows clear scaling laws — larger models and more compute produce predictably better images, just like with LLMs.
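The patch-and-condition pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the shapes are arbitrary, and the scale/shift vectors that DiT's adaLN blocks predict from the timestep and class embedding via a learned MLP are passed in directly here.

```python
import numpy as np

def patchify(image, patch_size=8):
    """Split an (H, W, C) image into a sequence of flattened patches,
    the way DiT (like ViT) turns a latent image into tokens."""
    h, w, c = image.shape
    p = patch_size
    patches = image.reshape(h // p, p, w // p, p, c)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, p * p * c)
    return patches

def adaln_modulate(tokens, scale, shift):
    """Adaptive LayerNorm (adaLN): normalize each token, then apply a
    per-channel scale and shift. In DiT these come from a learned MLP
    over the timestep/class embedding; here they are given directly."""
    normed = (tokens - tokens.mean(-1, keepdims=True)) / (
        tokens.std(-1, keepdims=True) + 1e-6)
    return normed * (1 + scale) + shift

image = np.random.randn(32, 32, 4)   # e.g. a 32x32x4 VAE latent (illustrative)
tokens = patchify(image)             # -> (16, 256): 16 tokens of dim 256
scale = np.zeros(256)                # stand-ins for the conditioning outputs
shift = np.zeros(256)
out = adaln_modulate(tokens, scale, shift)
```

Each Transformer block then runs self-attention and an MLP over these conditioned tokens; at the output, the token sequence is un-patchified back into an image-shaped prediction.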

From U-Net to Transformer

U-Nets process images at multiple resolutions, downsampling then upsampling with skip connections. This inductive bias was useful when compute was limited, but it introduces architectural complexity and doesn't scale as cleanly. Transformers, with their uniform architecture, are simpler to scale and benefit more from additional compute and data. The trade-off: Transformers are more memory-hungry due to the quadratic attention over all image patches.
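The quadratic cost mentioned above is easy to quantify. A rough sketch, using patch size 2 (the setting the DiT paper found strongest) over square latents of a few illustrative sizes:

```python
def num_tokens(height, width, patch_size):
    """Token count for a DiT over an H x W latent with patch size p."""
    return (height // patch_size) * (width // patch_size)

def attention_pairs(n):
    """Self-attention compares every token with every other: O(n^2)."""
    return n * n

# Doubling the latent side quadruples the tokens, so attention grows 16x.
for side in (32, 64, 128):
    n = num_tokens(side, side, patch_size=2)
    print(f"{side}x{side} latent -> {n} tokens, "
          f"{attention_pairs(n):,} attention pairs")
```

This is why higher-resolution generation is so much more expensive for DiT than the token count alone suggests, and why patch size is a key knob trading quality against compute.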

MM-DiT: Multi-Modal DiT

Stable Diffusion 3 and Flux use MM-DiT (Multi-Modal DiT), which processes text and image tokens in two separate streams with their own weights. At each Transformer block, the two streams are concatenated for a joint attention operation and then split back apart, so text and image tokens exchange information throughout the network. This is more effective than the simpler text conditioning used in the original DiT. The text tokens come from frozen text encoders (such as CLIP and T5), while the image stream carries the noisy latent patches being denoised.
