Exploiting Distortion Information for Multi-Degraded Image Restoration

Wooksu Shin, Namhyuk Ahn, Jeong-Hyeon Moon, Kyung-Ah Sohn; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022, pp. 537-546


In recent years, a tremendous number of studies have addressed the image distortion restoration task, and deep learning-based methods have shown prominent performance improvements. However, assuming only a single distortion per image may not hold in many real-world scenarios. To mitigate this issue, some studies have proposed multi-distortion datasets that apply corruptions sequentially or spatially. In this work, we integrate these two perspectives on the multi-distortion nature and propose a new, holistic multi-distortion dataset. To restore multi-distorted images effectively, we introduce a distortion information-guided restoration network, which exploits conditional distortion information when reconstructing a given image. To do so, our framework first predicts the distortion types and their strengths and delivers these to the restoration module. In our experiments, we show that the proposed model outperforms the others, and we also demonstrate that any backbone network benefits from receiving the distortion information as prior knowledge.
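The two-stage idea described above (predict distortion types and strengths, then feed them to the restorer as conditioning) can be illustrated with a minimal sketch. This is not the authors' code; all names (`DistortionInfo`, `predict_distortions`, `restore`) and the toy restoration operation are illustrative assumptions, with stand-in logic in place of learned networks:

```python
# Hypothetical sketch of a distortion information-guided restoration pipeline:
# a predictor estimates distortion types/strengths, and the restoration module
# receives these predictions as conditional prior knowledge.

from dataclasses import dataclass
from typing import List


@dataclass
class DistortionInfo:
    kind: str        # e.g. "noise", "blur", "jpeg" (assumed distortion labels)
    strength: float  # predicted severity in [0, 1]


def predict_distortions(image: List[float]) -> List[DistortionInfo]:
    """Stand-in for a learned predictor; a real model would output
    per-distortion probabilities and strength estimates from the image."""
    return [DistortionInfo("noise", 0.7), DistortionInfo("blur", 0.3)]


def restore(image: List[float], info: List[DistortionInfo]) -> List[float]:
    """Stand-in restoration module conditioned on the predicted info.
    Toy rule: smooth each pixel toward the mean, weighted by the
    predicted noise strength (a real module would be a deep network)."""
    noise = next((d.strength for d in info if d.kind == "noise"), 0.0)
    mean = sum(image) / len(image)
    return [(1 - noise) * p + noise * mean for p in image]


# End-to-end: predict distortion info first, then condition the restorer on it.
degraded = [0.2, 0.9, 0.1, 0.8]
info = predict_distortions(degraded)
restored = restore(degraded, info)
print([round(p, 2) for p in restored])
```

The key design point is that the restoration module is not distortion-agnostic: its behavior changes with the predicted type and strength, which is why, per the abstract, any backbone can benefit from this prior.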

Related Material

@InProceedings{Shin_2022_CVPR,
    author    = {Shin, Wooksu and Ahn, Namhyuk and Moon, Jeong-Hyeon and Sohn, Kyung-Ah},
    title     = {Exploiting Distortion Information for Multi-Degraded Image Restoration},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2022},
    pages     = {537-546}
}