Boosting Image Restoration via Priors from Pre-trained Models

Xiaogang Xu, Shu Kong, Tao Hu, Zhe Liu, Hujun Bao; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 2900-2909

Abstract


Pre-trained models with large-scale training data, such as CLIP and Stable Diffusion, have demonstrated remarkable performance in various high-level computer vision tasks such as image understanding and generation from language descriptions. Yet their potential for low-level tasks such as image restoration remains relatively unexplored. In this paper, we explore such models to enhance image restoration. As off-the-shelf features (OSF) from pre-trained models do not directly serve image restoration, we propose to learn an additional lightweight module, called the Pre-Train-Guided Refinement Module (PTG-RM), to refine the restoration results of a target restoration network with OSF. PTG-RM consists of two components: Pre-Train-Guided Spatial-Varying Enhancement (PTG-SVE) and Pre-Train-Guided Channel-Spatial Attention (PTG-CSA). PTG-SVE enables optimal short- and long-range neural operations, while PTG-CSA enhances spatial-channel attention for restoration-related learning. Extensive experiments demonstrate that PTG-RM, with its compact size (<1M parameters), effectively enhances the restoration performance of various models across different tasks, including low-light enhancement, deraining, deblurring, and denoising.
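
The abstract only names PTG-RM's two components without detailing their layers. As a rough illustration of the refinement pattern it describes (a frozen pre-trained model's features guiding a small module that refines a restoration network's output), the PyTorch sketch below is a minimal assumption-based example: the class name PTGRMSketch, the feature dimension, and all layer choices are hypothetical and are not taken from the paper.

# Minimal sketch of the refinement idea, under assumed dimensions and layers.
import torch
import torch.nn as nn


class PTGRMSketch(nn.Module):
    """Hypothetical lightweight refinement module: fuses off-the-shelf
    features (OSF) from a frozen pre-trained model with the output of a
    target restoration network. Not the authors' implementation."""

    def __init__(self, osf_dim=512, img_channels=3, hidden=32):
        super().__init__()
        # Project the global OSF vector to a small guidance embedding.
        self.osf_proj = nn.Sequential(
            nn.Linear(osf_dim, hidden), nn.ReLU(inplace=True)
        )
        # Spatial-varying enhancement (assumed form): predict a per-pixel
        # residual from the restored image conditioned on the guidance.
        self.sve = nn.Sequential(
            nn.Conv2d(img_channels + hidden, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, img_channels, 3, padding=1),
        )
        # Channel-spatial attention (assumed form): a lightweight gate
        # that modulates where and how strongly the residual is applied.
        self.csa = nn.Sequential(
            nn.Conv2d(img_channels, hidden, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, img_channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, restored, osf):
        # restored: (B, 3, H, W) output of any target restoration network.
        # osf: (B, osf_dim) global feature from a frozen model such as CLIP.
        b, _, h, w = restored.shape
        guide = self.osf_proj(osf).view(b, -1, 1, 1).expand(-1, -1, h, w)
        residual = self.sve(torch.cat([restored, guide], dim=1))
        gate = self.csa(restored)
        return restored + gate * residual  # refined image


if __name__ == "__main__":
    restored = torch.rand(1, 3, 64, 64)   # placeholder restored image
    osf = torch.rand(1, 512)              # placeholder CLIP-like feature
    refined = PTGRMSketch()(restored, osf)
    print(refined.shape)                  # torch.Size([1, 3, 64, 64])

In this sketch the refinement is a gated residual on top of the restoration network's output, which keeps the added module small and leaves the target network untouched, consistent with the abstract's claim of a compact (<1M parameter) add-on.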

Related Material


[bibtex]
@InProceedings{Xu_2024_CVPR,
    author    = {Xu, Xiaogang and Kong, Shu and Hu, Tao and Liu, Zhe and Bao, Hujun},
    title     = {Boosting Image Restoration via Priors from Pre-trained Models},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {2900-2909}
}