@InProceedings{Mei_2025_WACV,
  author    = {Mei, Kangfu and Nair, Nithin Gopalakrishnan and Patel, Vishal},
  title     = {Improving Conditional Diffusion Models through Re-Noising from Unconditional Diffusion Priors},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {3792-3801}
}
Improving Conditional Diffusion Models through Re-Noising from Unconditional Diffusion Priors
Abstract
Conditional diffusion probabilistic models can model the distribution of natural images and can generate diverse and realistic samples based on given conditions. However, their results are often unrealistic, with observable color shifts and texture artifacts. We believe this issue results from the divergence between the probability distribution learned by the model and the distribution of natural images: the delicate conditions gradually enlarge this divergence at each sampling timestep. To address this issue, we introduce a new method that brings the predicted samples to the training data manifold using a pretrained unconditional diffusion model. The unconditional model acts as a regularizer and reduces the divergence introduced by the conditional model at each sampling step. We perform comprehensive experiments to demonstrate the effectiveness of our approach on super-resolution, colorization, turbulence removal, and image-deraining tasks. The improvements obtained by our method suggest that such priors can be incorporated as a general plug-in for improving conditional diffusion models.
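The correction described above can be sketched in scalar form. This is a hypothetical illustration, not the authors' exact algorithm: the function names (`renoise_step`, `cond_denoise`, `uncond_denoise`), the blending weight `w`, and the scalar setting are all assumptions made for clarity. The idea shown is the one the abstract states: re-noise the conditional model's prediction back to the current noise level, let a pretrained unconditional model denoise it, and use that as a regularizing pull toward the data manifold.

```python
import math
import random

def renoise_step(x, t, cond_denoise, uncond_denoise, alpha_bar, w=0.5):
    """One hypothetical re-noising correction step (a sketch, not the
    paper's exact procedure). The conditional prediction is perturbed
    back to noise level t and re-denoised by a pretrained unconditional
    model, which acts as a regularizer toward the learned image manifold."""
    # 1. Conditional model predicts a clean sample x0 from the noisy x.
    x0_cond = cond_denoise(x, t)
    # 2. Re-noise that prediction to timestep t (DDPM-style forward step).
    a = alpha_bar[t]
    eps = random.gauss(0.0, 1.0)
    x_renoised = math.sqrt(a) * x0_cond + math.sqrt(1.0 - a) * eps
    # 3. The unconditional prior denoises the re-noised sample.
    x0_uncond = uncond_denoise(x_renoised, t)
    # 4. Blend: the unconditional prediction corrects the conditional one.
    return (1.0 - w) * x0_cond + w * x0_uncond

# Toy usage with scalar stand-ins for the two networks:
cond = lambda x, t: x + 0.2   # biased conditional prediction (toy)
unc = lambda x, t: x * 0.9    # prior that pulls toward zero (toy)
corrected = renoise_step(1.0, 0, cond, unc, alpha_bar=[1.0], w=0.5)
```

In a real sampler `x` would be an image tensor and the two denoisers would be the conditional and unconditional diffusion networks evaluated at timestep `t`; the blend in step 4 is one simple way to apply the regularization per sampling step.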