Unpaired Face Restoration via Learnable Cross-Quality Shift

Yangyi Dong, Xiaoyun Zhang, Zhixin Wang, Ya Zhang, Siheng Chen, Yanfeng Wang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022, pp. 667-675


Face restoration aims to recover high-quality (HQ) face images from low-quality (LQ) ones with various unknown degradations. Unpaired face restoration approaches focus on adaptation to unseen degradations, which is a more challenging setting. Recently, the generative facial priors of StyleGAN have been used to improve the restoration capability of paired face restoration methods. For unpaired methods, however, using face priors is challenging due to the lack of paired supervision. To address this issue, we take advantage of the editing capabilities of StyleGAN's latent code and propose a novel learnable cross-quality shift. The proposed learnable cross-quality shift not only introduces generative facial priors into the unpaired framework, but also enables straightforward addition/subtraction in the latent feature space to achieve quality conversion. Furthermore, we design a two-branch framework with the proposed cross-quality shift to handle unpaired data and improve the fidelity of restoration. With the unpaired framework, our method can be fine-tuned on images with unseen degradations. Experimental results show that (i) compared to state-of-the-art methods, our method improves performance under both moderate and severe degradation; and (ii) both the proposed learnable cross-quality shift and the two-branch framework benefit restoration performance.
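The core idea of the cross-quality shift, as described in the abstract, is that quality conversion reduces to addition or subtraction of a learnable vector in StyleGAN's latent space. The following minimal sketch illustrates that arithmetic only; the variable names, latent dimensionality, and the way `delta` would be learned are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch: a single learnable shift vector `delta` moves a
# latent code between the LQ and HQ regions of a StyleGAN-like latent
# space. In the paper, `delta` would be learned end-to-end; here it is
# a fixed random vector standing in for the trained shift.

LATENT_DIM = 512  # typical StyleGAN W-space dimensionality (assumed)

rng = np.random.default_rng(0)
delta = rng.normal(scale=0.1, size=LATENT_DIM)  # stand-in for the learned shift


def lq_to_hq(w_lq: np.ndarray) -> np.ndarray:
    """Quality conversion LQ -> HQ as simple addition in latent space."""
    return w_lq + delta


def hq_to_lq(w_hq: np.ndarray) -> np.ndarray:
    """The inverse conversion HQ -> LQ is the corresponding subtraction."""
    return w_hq - delta


w_lq = rng.normal(size=LATENT_DIM)  # stand-in for an encoded LQ face latent
w_hq = lq_to_hq(w_lq)               # shift toward the HQ region
w_back = hq_to_lq(w_hq)             # subtracting the shift recovers the input

print(np.allclose(w_back, w_lq))    # the round trip is exact: True
```

In practice the shifted HQ latent would be fed to a pretrained StyleGAN generator to synthesize the restored face, which is how the generative facial prior enters the unpaired pipeline.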

Related Material

@InProceedings{Dong_2022_CVPR,
    author    = {Dong, Yangyi and Zhang, Xiaoyun and Wang, Zhixin and Zhang, Ya and Chen, Siheng and Wang, Yanfeng},
    title     = {Unpaired Face Restoration via Learnable Cross-Quality Shift},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2022},
    pages     = {667-675}
}