Visual-Instructed Degradation Diffusion for All-in-One Image Restoration

Wenyang Luo, Haina Qin, Zewen Chen, Libin Wang, Dandan Zheng, Yuming Li, Yufan Liu, Bing Li, Weiming Hu; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 12764-12777

Abstract


Image restoration tasks such as deblurring, denoising, and dehazing typically require a distinct model for each degradation type, which limits generalization in real-world scenarios with mixed or unknown degradations. In this work, we propose Defusion, a novel all-in-one image restoration framework that utilizes visual instruction-guided degradation diffusion. Unlike existing methods that rely on task-specific models or ambiguous text-based priors, Defusion constructs explicit visual instructions that align with the visual degradation patterns. These instructions are grounded by applying degradations to standardized visual elements, capturing intrinsic degradation features while remaining agnostic to image semantics. Defusion then uses these visual instructions to guide a diffusion-based model that operates directly in the degradation space, reconstructing high-quality images by denoising the degradation effects with enhanced stability and generalizability. Comprehensive experiments demonstrate that Defusion outperforms state-of-the-art methods across diverse image restoration tasks, including complex and real-world degradations.

Related Material


BibTeX
@InProceedings{Luo_2025_CVPR,
    author    = {Luo, Wenyang and Qin, Haina and Chen, Zewen and Wang, Libin and Zheng, Dandan and Li, Yuming and Liu, Yufan and Li, Bing and Hu, Weiming},
    title     = {Visual-Instructed Degradation Diffusion for All-in-One Image Restoration},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {12764-12777}
}