DiG: Scalable and Efficient Diffusion Models with Gated Linear Attention
Lianghui Zhu, Zilong Huang, Bencheng Liao, Jun Hao Liew, Hanshu Yan, Jiashi Feng, Xinggang Wang; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 7664-7674
Abstract
Diffusion models with large-scale pre-training have achieved significant success in visual content generation, particularly exemplified by Diffusion Transformers (DiT). However, DiT models suffer from the quadratic complexity of self-attention, which limits their efficiency on long sequences. In this paper, we aim to incorporate the sub-quadratic modeling capability of Gated Linear Attention (GLA) into the 2D diffusion backbone. Specifically, we introduce Diffusion Gated Linear Attention Transformers (DiG), a simple, adoptable solution with minimal parameter overhead. We offer two variants, i.e., a plain and a U-shape architecture, showing superior efficiency and competitive effectiveness. Beyond outperforming DiT and other sub-quadratic-time diffusion models at 256x256 resolution, DiG is more efficient than these methods from a resolution of 512 onward. Specifically, DiG-S/2 is 2.5x faster and saves 75.7% GPU memory compared to DiT-S/2 at a resolution of 1792. Moreover, DiG-XL/2 is 4.2x faster than the Mamba-based diffusion model at a resolution of 1024 and 1.8x faster than DiT with FlashAttention-2 at a resolution of 2048. The code is released at https://github.com/hustvl/DiG.
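For readers unfamiliar with GLA, its core can be written as a gated linear recurrence over a key-value memory, S_t = diag(g_t) S_{t-1} + k_t v_t^T with readout o_t = q_t^T S_t, which is why it scales linearly rather than quadratically in sequence length. Below is a minimal single-head PyTorch sketch of that recurrence; the function name, shapes, and gating layout are illustrative assumptions, not the released DiG implementation (see the repository linked in the abstract for the actual code).

import torch

def gated_linear_attention(q, k, v, g):
    """Recurrent form of gated linear attention (hypothetical helper).

    q, k: (T, d_k) queries/keys; v: (T, d_v) values;
    g:    (T, d_k) data-dependent forget gates, each entry in (0, 1).
    Runs in O(T * d_k * d_v) time with O(d_k * d_v) state memory,
    versus O(T^2 * d) for softmax attention.
    """
    T, d_k = k.shape
    d_v = v.shape[-1]
    state = torch.zeros(d_k, d_v)  # running key-value memory S_t
    outs = []
    for t in range(T):
        # decay the memory with the gate, then add the new key-value outer product
        state = g[t].unsqueeze(-1) * state + torch.outer(k[t], v[t])
        # read the memory out with the current query
        outs.append(q[t] @ state)
    return torch.stack(outs)  # (T, d_v)

In practice this recurrence is computed with chunkwise-parallel kernels rather than a Python loop, but the loop form makes the constant-size state, and hence the memory savings reported above, explicit.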
Related Material
[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Zhu_2025_CVPR,
  author    = {Zhu, Lianghui and Huang, Zilong and Liao, Bencheng and Liew, Jun Hao and Yan, Hanshu and Feng, Jiashi and Wang, Xinggang},
  title     = {DiG: Scalable and Efficient Diffusion Models with Gated Linear Attention},
  booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
  month     = {June},
  year      = {2025},
  pages     = {7664-7674}
}