Clockwork Diffusion: Efficient Generation With Model-Step Distillation

Amirhossein Habibian, Amir Ghodrati, Noor Fathima, Guillaume Sautiere, Risheek Garrepalli, Fatih Porikli, Jens Petersen; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 8352-8361

Abstract


This work aims to improve the efficiency of text-to-image diffusion models. While diffusion models use computationally expensive UNet-based denoising operations in every generation step, we identify that not all operations are equally relevant for the final output quality. In particular, we observe that UNet layers operating on high-res feature maps are relatively sensitive to small perturbations. In contrast, low-res feature maps influence the semantic layout of the final image and can often be perturbed with no noticeable change in the output. Based on this observation, we propose Clockwork Diffusion, a method that periodically reuses computation from preceding denoising steps to approximate low-res feature maps at one or more subsequent steps. For multiple baselines, and for both text-to-image generation and image editing, we demonstrate that Clockwork leads to comparable or improved perceptual scores with drastically reduced computational complexity. As an example, for Stable Diffusion v1.5 with 8 DPM++ steps we save 32% of FLOPs with negligible FID and CLIP change. We release code at https://github.com/Qualcomm-AI-research/clockwork-diffusion
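
The periodic-reuse mechanism is easiest to see in code. Below is a minimal, self-contained sketch on a toy two-resolution UNet, not the released implementation: the module name ClockworkUNetSketch, the clock parameter, and the one-layer adaptor are illustrative assumptions standing in for the paper's full architecture and learned approximation.

    # Sketch of the clockwork idea: on every `clock`-th step run the full
    # (expensive) low-res path and cache its output; on the remaining steps
    # approximate it cheaply from the cache, recomputing only the
    # quality-sensitive high-res layers. All names here are illustrative.
    import torch
    import torch.nn as nn

    class ClockworkUNetSketch(nn.Module):
        def __init__(self, channels: int = 64, clock: int = 2):
            super().__init__()
            self.clock = clock
            # High-res path: perturbation-sensitive, always computed.
            self.hi_res_in = nn.Conv2d(4, channels, 3, padding=1)
            self.hi_res_out = nn.Conv2d(channels, 4, 3, padding=1)
            # Low-res path: expensive but tolerant to approximation.
            self.low_res = nn.Sequential(
                nn.AvgPool2d(2),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.SiLU(),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.Upsample(scale_factor=2, mode="nearest"),
            )
            # Tiny adaptor predicting this step's low-res features from cache.
            self.adaptor = nn.Conv2d(channels, channels, 1)
            self._cache = None

        def forward(self, x: torch.Tensor, step: int) -> torch.Tensor:
            h = self.hi_res_in(x)
            if step % self.clock == 0 or self._cache is None:
                low = self.low_res(h)            # full low-res computation
                self._cache = low.detach()       # cache for later steps
            else:
                low = self.adaptor(self._cache)  # cheap approximation
            return self.hi_res_out(h + low)

    # Usage: a short denoising loop. The full low-res path runs only on
    # every `clock`-th step; the other steps reuse the cached features.
    model = ClockworkUNetSketch()
    latent = torch.randn(1, 4, 64, 64)
    for step in range(8):
        latent = latent - 0.1 * model(latent, step)

With clock = 2, half of the steps skip the low-res path entirely, which is where the FLOP savings in the abstract come from; the high-res layers still run every step because they are the ones sensitive to perturbation.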

Related Material


@InProceedings{Habibian_2024_CVPR,
    author    = {Habibian, Amirhossein and Ghodrati, Amir and Fathima, Noor and Sautiere, Guillaume and Garrepalli, Risheek and Porikli, Fatih and Petersen, Jens},
    title     = {Clockwork Diffusion: Efficient Generation With Model-Step Distillation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {8352-8361}
}