CoDi: Conditional Diffusion Distillation for Higher-Fidelity and Faster Image Generation

Kangfu Mei, Mauricio Delbracio, Hossein Talebi, Zhengzhong Tu, Vishal M. Patel, Peyman Milanfar; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 9048-9058

Abstract


Large generative diffusion models have revolutionized text-to-image generation and offer immense potential for conditional generation tasks such as image enhancement, restoration, editing, and compositing. However, their widespread adoption is hindered by their high computational cost, which limits their real-time application. To address this challenge, we introduce a novel method dubbed CoDi that adapts a pre-trained latent diffusion model to accept additional image conditioning inputs while significantly reducing the sampling steps required to achieve high-quality results. Our method can leverage architectures such as ControlNet to incorporate conditioning inputs without compromising the model's prior knowledge gained during large-scale pre-training. Additionally, a conditional consistency loss enforces consistent predictions across diffusion steps, effectively compelling the model to generate high-quality images with conditions in a few steps. Our conditional-task learning and distillation approach outperforms previous distillation methods, achieving a new state-of-the-art in producing high-quality images with very few steps (e.g., 1-4) across multiple tasks, including super-resolution, text-guided image editing, and depth-to-image generation.
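To make the conditional consistency loss concrete, here is a minimal PyTorch-style sketch of one distillation training step. This is an illustration of consistency distillation with an image condition, not the authors' released code: the names student, ema_student, teacher, and sigma are assumed callables, and the single Euler solver step for the teacher is one possible choice among several.

    # Hypothetical sketch of a conditional consistency-distillation step.
    # student / ema_student / teacher stand in for diffusion networks that
    # take (noisy latent, timestep, image condition); none of these names
    # come from the paper.
    import torch
    import torch.nn.functional as F

    def conditional_consistency_loss(student, ema_student, teacher,
                                     x0, cond, t_next, t_cur, sigma):
        """One consistency-distillation step with an image condition.

        x0:     clean latents, shape (B, C, H, W)
        cond:   image conditioning input (e.g. a low-res image or depth map)
        t_next, t_cur: adjacent discretized noise levels, t_next > t_cur
        sigma:  function mapping a timestep to its noise standard deviation
        """
        noise = torch.randn_like(x0)
        x_next = x0 + sigma(t_next) * noise  # diffuse to the higher noise level

        with torch.no_grad():
            # One teacher solver step from t_next down to t_cur (a single
            # Euler step here, assuming teacher returns the ODE direction).
            d = teacher(x_next, t_next, cond)
            x_cur = x_next + (sigma(t_cur) - sigma(t_next)) * d

            # Target comes from an EMA copy of the student, as in
            # standard consistency distillation.
            target = ema_student(x_cur, t_cur, cond)

        # Penalize inconsistent predictions across the two diffusion steps.
        pred = student(x_next, t_next, cond)
        return F.mse_loss(pred, target)

Intuitively, the loss ties the student's prediction at the noisier level to the EMA target at the less noisy level, so the distilled model maps any point on a conditional trajectory to the same output and can therefore sample in very few steps.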

Related Material


BibTeX:
@InProceedings{Mei_2024_CVPR,
    author    = {Mei, Kangfu and Delbracio, Mauricio and Talebi, Hossein and Tu, Zhengzhong and Patel, Vishal M. and Milanfar, Peyman},
    title     = {CoDi: Conditional Diffusion Distillation for Higher-Fidelity and Faster Image Generation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {9048-9058}
}