DiTAS: Quantizing Diffusion Transformers via Enhanced Activation Smoothing
Zhenyuan Dong, Sai Qian Zhang
Abstract
Diffusion Transformers (DiTs) have recently attracted significant interest from both industry and academia due to their enhanced capabilities in visual generation, surpassing the performance of traditional diffusion models that employ U-Net. However, the improved performance of DiTs comes at the expense of higher parameter counts and implementation costs, which significantly limit their deployment on resource-constrained devices such as mobile phones. We propose DiTAS, a data-free post-training quantization (PTQ) method for efficient DiT inference. DiTAS relies on the proposed temporal-aggregated smoothing technique to mitigate the impact of channel-wise outliers within the input activations, leading to much lower quantization error at extremely low bit widths. To further enhance the performance of the quantized DiT, we adopt a layer-wise grid-search strategy to optimize the smoothing factor. Moreover, we integrate a training-free LoRA module for weight quantization, leveraging alternating optimization to minimize quantization errors without additional fine-tuning. Experimental results demonstrate that our approach enables 4-bit weight, 8-bit activation (W4A8) quantization for DiTs while maintaining performance comparable to that of the full-precision model. Code is available at https://github.com/DZY122/DiTAS.
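To make the core idea concrete, below is a minimal sketch of SmoothQuant-style activation smoothing with per-channel statistics aggregated over diffusion timesteps, followed by a toy layer-wise grid search over the smoothing exponent. Everything here is an illustrative assumption: the function names (`quantize_tensor`, `smoothing_factor`, `w4a8_error`), the max-based temporal aggregation, and the alpha grid are hypothetical and not taken from the DiTAS codebase; the training-free LoRA component is omitted entirely. See the repository linked above for the actual implementation.

```python
# Illustrative sketch only: SmoothQuant-style smoothing with activation
# statistics aggregated over diffusion timesteps, plus a toy grid search
# over the smoothing exponent. Hypothetical names; not the DiTAS codebase.
import torch

def quantize_tensor(x: torch.Tensor, n_bits: int) -> torch.Tensor:
    """Symmetric uniform quantize-dequantize with a per-tensor scale."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    return (x / scale).round().clamp(-qmax, qmax) * scale

def smoothing_factor(acts_per_step, weight, alpha):
    """Per-input-channel factor s, aggregating |activation| maxima over
    all diffusion timesteps (a simple stand-in for temporal aggregation)."""
    act_max = torch.stack([a.abs().amax(dim=0) for a in acts_per_step]).amax(dim=0)
    w_max = weight.abs().amax(dim=0)  # weight: [out_features, in_features]
    return (act_max.pow(alpha) / w_max.pow(1.0 - alpha).clamp(min=1e-8)).clamp(min=1e-8)

def w4a8_error(acts_per_step, weight, alpha):
    """Mean output error of the smoothed, W4A8-quantized layer on timestep 0."""
    s = smoothing_factor(acts_per_step, weight, alpha)
    w_q = quantize_tensor(weight * s, n_bits=4)             # fold s into weights: W4
    x_q = quantize_tensor(acts_per_step[0] / s, n_bits=8)   # divide activations by s: A8
    ref = acts_per_step[0] @ weight.t()
    return (x_q @ w_q.t() - ref).abs().mean().item()

torch.manual_seed(0)
weight = torch.randn(64, 32)  # [out_features, in_features] of one linear layer
# 10 timesteps of calibration activations; channel 3 carries an outlier.
outlier = 1.0 + 5.0 * (torch.arange(32) == 3).float()
acts = [torch.randn(128, 32) * outlier for _ in range(10)]

# Layer-wise grid search over the smoothing exponent alpha.
best_alpha = min([0.25, 0.5, 0.75], key=lambda a: w4a8_error(acts, weight, a))
print(f"best alpha: {best_alpha}, error: {w4a8_error(acts, weight, best_alpha):.4f}")
```

One reason this style of smoothing suits a data-free PTQ setting: folding the factor s into the weights leaves the full-precision layer mathematically unchanged, since (x / s) @ (W * s).T = x @ W.T, so once calibration statistics are aggregated over timesteps the transform can be baked into the checkpoint with no fine-tuning.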
Related Material
[pdf]
[supp]
[arXiv]
[bibtex]

@InProceedings{Dong_2025_WACV,
  author    = {Dong, Zhenyuan and Zhang, Sai Qian},
  title     = {DiTAS: Quantizing Diffusion Transformers via Enhanced Activation Smoothing},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {4606-4615}
}