Diffusion Model Compression for Image-to-Image Translation

Geonung Kim, Beomsu Kim, Eunhyeok Park, Sunghyun Cho; Proceedings of the Asian Conference on Computer Vision (ACCV), 2024, pp. 2105-2123

Abstract


As recent advances in large-scale Text-to-Image (T2I) diffusion models have yielded remarkably high-quality image generation, diverse downstream Image-to-Image (I2I) applications have emerged. Despite the impressive results achieved by these I2I models, their practical utility is hampered by their large model size and the computational burden of the iterative denoising process. In this paper, we propose a novel compression method tailored for diffusion-based I2I models. Based on the observations that the image conditions of I2I models already provide rich information on image structures, and that the time steps with a larger impact tend to be biased, we develop surprisingly simple yet effective approaches for reducing the model size and latency. We validate the effectiveness of our method on three representative I2I tasks: InstructPix2Pix for image editing, StableSR for image restoration, and ControlNet for image-conditional image generation. Our approach achieves satisfactory output quality with 39.2%, 56.4%, and 39.2% reductions in model footprint, and 81.4%, 68.7%, and 31.1% reductions in latency for InstructPix2Pix, StableSR, and ControlNet, respectively.
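The abstract only summarizes the two observations behind the compression scheme. As a rough, hypothetical Python sketch (not the authors' released code), the following illustrates one way an inference schedule could be restricted to a biased subset of time steps while the image condition continues to supply structural guidance; the function names, the keep_ratio parameter, which end of the schedule is retained, and the toy denoiser update are all assumptions made purely for illustration.

    import numpy as np

    def biased_timestep_schedule(num_train_steps=1000, num_infer_steps=10, keep_ratio=0.4):
        # Uniform inference schedule over the training time steps, high noise first.
        full = np.linspace(num_train_steps - 1, 0, num_infer_steps).round().astype(int)
        # Keep only a biased subset of the schedule; which end is kept (here the
        # low-noise end) is an assumption for illustration, not the paper's choice.
        return full[int(len(full) * (1 - keep_ratio)):]

    def denoise_step(x, t, condition):
        # Placeholder for one evaluation of a (compressed) diffusion denoiser;
        # here it simply pulls the sample toward the image condition.
        return x - 0.1 * (x - condition)

    def run_i2i(condition, schedule, rng):
        # Start from a lightly perturbed copy of the condition, since the condition
        # already carries the image structure, then run the shortened schedule.
        x = condition + 0.1 * rng.standard_normal(condition.shape)
        for t in schedule:
            x = denoise_step(x, t, condition)
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        condition = np.zeros((8, 8))              # stand-in for an image condition
        schedule = biased_timestep_schedule()
        print("time steps used:", schedule.tolist())
        output = run_i2i(condition, schedule, rng)
        print("mean abs deviation from condition:", np.abs(output - condition).mean())

In a real pipeline, denoise_step would be a call into the compressed diffusion network, and the retained portion of the schedule would follow the paper's analysis of per-step impact rather than the fixed fraction assumed above.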

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Kim_2024_ACCV,
    author    = {Kim, Geonung and Kim, Beomsu and Park, Eunhyeok and Cho, Sunghyun},
    title     = {Diffusion Model Compression for Image-to-Image Translation},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2024},
    pages     = {2105-2123}
}