Real-World Super-Resolution Using Generative Adversarial Networks

Haoyu Ren, Amin Kheradmand, Mostafa El-Khamy, Shuangquan Wang, Dongwoon Bai, Jungwon Lee; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 436-437


Robust real-world super-resolution (SR) aims to generate perception-oriented high-resolution (HR) images from the corresponding low-resolution (LR) ones, without access to the paired LR-HR ground-truth. In this paper, we investigate how to advance the state of the art in real-world SR. Our method deploys an ensemble of generative adversarial networks (GANs) for robust real-world SR, where the constituent GANs are trained with different adversarial objectives. Because the ground-truth blur and noise models are unknown, we design a generic training set whose LR images are generated from a set of HR images by various degradation models. We achieve good perceptual quality when super-resolving LR images whose degradation was caused by unknown image processing artifacts. For real-world SR on images captured by mobile devices, the GANs are trained with weak supervision on a mobile SR training set of LR-HR image pairs, constructed from the DPED dataset, which provides registered mobile-DSLR image pairs at the same scale. Our ensemble of GANs uses cues from the image luminance and adjusts to generate better HR images under low illumination. Experiments on the NTIRE 2020 real-world super-resolution dataset show that our proposed SR approach achieves good perceptual quality.
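The generic training set described above pairs each HR image with an LR image produced by a randomly sampled degradation. A minimal NumPy sketch of such a degradation pipeline is shown below; the specific kernel family, sigma ranges, and noise levels are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def random_degrade(hr, scale=4, rng=None):
    """Synthesize an LR image from a single-channel HR image in [0, 1]
    using a randomly sampled Gaussian blur and noise level.
    Illustrative degradation model; the paper's exact kernels and
    noise parameters are not specified here."""
    rng = np.random.default_rng(rng)

    # Randomly sampled isotropic Gaussian blur kernel (hypothetical range).
    sigma = rng.uniform(0.5, 3.0)
    k = 11
    ax = np.arange(k) - k // 2
    kernel = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()

    # Blur via FFT convolution (circular boundary, adequate for a sketch).
    blurred = np.real(np.fft.ifft2(np.fft.fft2(hr) * np.fft.fft2(kernel, s=hr.shape)))

    # Downsample by simple strided subsampling for brevity.
    lr = blurred[::scale, ::scale]

    # Additive Gaussian noise with a randomly sampled level.
    noise_sigma = rng.uniform(0.0, 0.1)
    lr = lr + rng.normal(0.0, noise_sigma, lr.shape)
    return np.clip(lr, 0.0, 1.0)
```

Training then treats each sampled degradation as one of the "various degradation models", so the SR networks never see a single fixed blur/noise pair.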
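The luminance-cued adjustment can be thought of as blending ensemble members according to how dark the input is. The following is a hypothetical fusion rule that illustrates the idea; the function name, threshold, and linear weighting are assumptions, not the paper's actual mechanism.

```python
import numpy as np

def fuse_by_luminance(sr_low_light, sr_normal, lr, threshold=0.3):
    """Blend the outputs of two SR ensemble members using the mean
    luminance of the LR input as a cue (hypothetical fusion rule).

    sr_low_light: output of a member tuned for low-light inputs.
    sr_normal:    output of a member tuned for normal illumination.
    lr:           LR input image with values in [0, 1]; threshold > 0.
    """
    luminance = lr.mean()
    # Weight ramps from 0 (fully low-light model) at luminance <= 0
    # to 1 (fully normal model) at luminance >= 2 * threshold.
    w = np.clip((luminance - threshold) / threshold, 0.0, 1.0)
    return w * sr_normal + (1.0 - w) * sr_low_light
```

In practice the blend could be applied per region rather than per image, but a global weight already captures the abstract's point that the ensemble adapts to illumination.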

Related Material

@InProceedings{Ren_2020_CVPR_Workshops,
    author    = {Ren, Haoyu and Kheradmand, Amin and El-Khamy, Mostafa and Wang, Shuangquan and Bai, Dongwoon and Lee, Jungwon},
    title     = {Real-World Super-Resolution Using Generative Adversarial Networks},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2020},
    pages     = {436-437}
}