Selective Hourglass Mapping for Universal Image Restoration Based on Diffusion Model
Abstract
Universal image restoration is a practical and promising computer vision task for real-world applications. The main challenge of this task is handling different degradation distributions at once. Existing methods mainly utilize task-specific conditions (e.g., prompts) to guide the model to learn each distribution separately, a strategy we term multi-partite mapping. However, it is not suitable for universal model learning, as it ignores the shared information between different tasks. In this work, we propose an advanced selective hourglass mapping strategy based on a diffusion model, termed DiffUIR. Two novel considerations make DiffUIR non-trivial. First, we equip the model with strong condition guidance so that the diffusion model obtains an accurate generation direction (selective). More importantly, DiffUIR integrates a flexible shared distribution term (SDT) into the diffusion algorithm elegantly and naturally, which gradually maps the different distributions into a shared one. In the reverse process, combined with the SDT and strong condition guidance, DiffUIR iteratively guides the shared distribution back to the task-specific distribution with high image quality (hourglass). Without bells and whistles, by only modifying the mapping strategy, we achieve state-of-the-art performance on five image restoration tasks and 22 benchmarks in both the universal and zero-shot generalization settings. Surprisingly, a lightweight model with only 0.89M parameters is enough to achieve outstanding performance. The source code and pre-trained models are available at https://github.com/iSEE-Laboratory/DiffUIR.
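
For intuition only, below is a minimal, self-contained PyTorch sketch of the general idea described in the abstract: a residual-style forward process whose condition weight vanishes at the final timestep, so every degradation type ends in the same noise-like shared distribution, while the reverse process re-injects the degraded input as guidance on the way back to a task-specific clean image. The function names, the toy schedules, and the sinusoidal residual weight are assumptions made for this sketch, not the paper's actual SDT formulation; see the official repository linked above for the real algorithm.

import math
import torch


def schedules(t, T):
    # Toy schedules for this sketch (not the paper's): the noise weight grows
    # from 0 to 1, while the condition/residual weight starts at 0, peaks
    # mid-trajectory, and returns to 0 at t = T.
    a = t / T
    b = math.sin(math.pi * t / T)
    return a, b


def forward_step(x0, cond, t, T, noise=None):
    # Map a clean image x0 toward the shared terminal distribution.
    # At t = 0 this returns x0; at t = T it returns pure noise, which is the
    # same marginal for every degradation type (the shared distribution).
    if noise is None:
        noise = torch.randn_like(x0)
    a, b = schedules(t, T)
    return (1 - a) * x0 + b * (cond - x0) + a * noise


def reverse_step(x0_pred, cond, t, T):
    # One toy reverse update: resample the previous state from the same
    # marginal, re-injecting the degraded input cond as condition guidance.
    a, b = schedules(t - 1, T)
    noise = torch.randn_like(x0_pred) if t > 1 else torch.zeros_like(x0_pred)
    return (1 - a) * x0_pred + b * (cond - x0_pred) + a * noise


# Tiny usage example with random tensors standing in for images.
T = 10
x0 = torch.rand(1, 3, 8, 8)          # clean target (toy)
cond = torch.rand(1, 3, 8, 8)        # degraded input of some task
x_T = forward_step(x0, cond, T, T)   # noise only: identical for all tasks

In the actual method, the shared distribution term and the strong condition guidance are integrated into the diffusion formulation itself; this sketch only conveys the hourglass intuition of collapsing different task distributions into a shared one and expanding back to a task-specific result.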
Related Material
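
[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Zheng_2024_CVPR,
  author    = {Zheng, Dian and Wu, Xiao-Ming and Yang, Shuzhou and Zhang, Jian and Hu, Jian-Fang and Zheng, Wei-Shi},
  title     = {Selective Hourglass Mapping for Universal Image Restoration Based on Diffusion Model},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {25445-25455}
}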