CFSNet: Toward a Controllable Feature Space for Image Restoration

Wei Wang, Ruiming Guo, Yapeng Tian, Wenming Yang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 4140-4149

Abstract


Deep learning methods have achieved great progress in image restoration as measured by specific metrics (e.g., PSNR, SSIM). However, the perceptual quality of the restored image is relatively subjective, and users need to control the reconstruction result according to personal preferences or image characteristics, which cannot be done with existing deterministic networks. This motivates us to design a unified interactive framework for general image restoration tasks. Under this framework, users can continuously transition between different objectives, e.g., the perception-distortion trade-off in image super-resolution, or the trade-off between noise reduction and detail preservation. We achieve this goal by controlling the latent features of the designed network. Specifically, our proposed framework, the Controllable Feature Space Network (CFSNet), couples two branches trained for different objectives. The framework adaptively learns the coupling coefficients for different layers and channels, which provides finer control over the restored image quality. Experiments on several typical image restoration tasks validate the effectiveness of the proposed method. Code is available at https://github.com/qibao77/CFSNet.
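To make the idea of learned coupling coefficients concrete, below is a minimal PyTorch sketch of one possible coupling block: per-channel coefficients alpha are predicted from a user-supplied control scalar and used to blend features from the two branches as alpha * main + (1 - alpha) * tuning. The class, layer sizes, and parameter names are hypothetical and not taken from the authors' released code.

```python
import torch
import torch.nn as nn


class CouplingBlock(nn.Module):
    """Sketch of a coupling block: blends main-branch and tuning-branch
    features with per-channel coefficients predicted from a control scalar.
    This is an illustrative assumption, not the paper's exact module."""

    def __init__(self, channels, hidden=32):
        super().__init__()
        # Small MLP mapping the user control scalar to per-channel alphas.
        self.alpha_net = nn.Sequential(
            nn.Linear(1, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
            nn.Sigmoid(),  # keep each alpha in (0, 1)
        )

    def forward(self, main_feat, tuning_feat, control):
        # control: (batch, 1) scalar per image, e.g. 0 = distortion-oriented,
        # 1 = perception-oriented; main_feat/tuning_feat: (batch, C, H, W).
        alpha = self.alpha_net(control)            # (batch, C)
        alpha = alpha.unsqueeze(-1).unsqueeze(-1)  # broadcast over H, W
        return alpha * main_feat + (1.0 - alpha) * tuning_feat


# Example usage with two different control settings in one batch.
block = CouplingBlock(channels=64)
main = torch.randn(2, 64, 32, 32)
tune = torch.randn(2, 64, 32, 32)
out = block(main, tune, torch.tensor([[0.0], [1.0]]))
```

At test time, sweeping the control scalar from 0 to 1 would trace a continuous transition between the two objectives, which is the interactive behavior the abstract describes.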

Related Material


[bibtex]
@InProceedings{Wang_2019_ICCV,
author = {Wang, Wei and Guo, Ruiming and Tian, Yapeng and Yang, Wenming},
title = {CFSNet: Toward a Controllable Feature Space for Image Restoration},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}