TFRGAN: Leveraging Text Information for Blind Face Restoration With Extreme Degradation

Chengxing Xie, Qian Ning, Weisheng Dong, Guangming Shi; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 2535-2545

Abstract


Blind face restoration aims to recover high-quality face images from low-quality images with unknown degradation. Previous works based on geometric or generative priors have achieved impressive performance, but the task remains challenging, particularly when restoring severely degraded faces. To address this issue, we propose a novel approach, TFRGAN, that leverages textual information to improve the restoration of extremely degraded face images. Specifically, we propose to generate a better and more accurate latent code for the StyleGAN2 prior by fusing text and image information in the latent code space. In addition, the extracted textual features are used to modulate the decoding features, yielding more realistic and natural facial images with more plausible details. Experimental results demonstrate the superiority of the proposed method for restoring severely degraded face images.
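The abstract describes two uses of text features: fusing them with image information to predict the latent code fed to the StyleGAN2 prior, and using them to modulate intermediate decoding features. The PyTorch sketch below illustrates one plausible realization of these two steps; all module names, dimensions, and fusion/modulation choices are illustrative assumptions, not the authors' actual architecture.

```python
# A minimal sketch of (1) text-image latent fusion and (2) text-conditioned
# feature modulation, assuming simple MLP fusion and scale/shift modulation.
import torch
import torch.nn as nn


class TextImageLatentFusion(nn.Module):
    """Fuse a text embedding with an image-derived latent code (hypothetical)."""

    def __init__(self, latent_dim=512, text_dim=512):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, latent_dim)
        self.fuse = nn.Sequential(
            nn.Linear(latent_dim * 2, latent_dim),
            nn.LeakyReLU(0.2),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, image_latent, text_feat):
        # image_latent: (B, latent_dim) predicted from the degraded face
        # text_feat:    (B, text_dim)   encoded facial description
        t = self.text_proj(text_feat)
        return self.fuse(torch.cat([image_latent, t], dim=1))


class TextModulation(nn.Module):
    """Scale/shift a decoder feature map using text features (hypothetical)."""

    def __init__(self, channels, text_dim=512):
        super().__init__()
        self.to_scale = nn.Linear(text_dim, channels)
        self.to_shift = nn.Linear(text_dim, channels)

    def forward(self, feat, text_feat):
        # feat: (B, C, H, W) intermediate decoding feature
        scale = self.to_scale(text_feat).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(text_feat).unsqueeze(-1).unsqueeze(-1)
        return feat * (1 + scale) + shift


# Usage with dummy tensors
fusion = TextImageLatentFusion()
mod = TextModulation(channels=256)
w = fusion(torch.randn(2, 512), torch.randn(2, 512))       # fused latent code
f = mod(torch.randn(2, 256, 32, 32), torch.randn(2, 512))  # modulated feature
```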

Related Material


[pdf]
[bibtex]
@InProceedings{Xie_2023_CVPR,
    author    = {Xie, Chengxing and Ning, Qian and Dong, Weisheng and Shi, Guangming},
    title     = {TFRGAN: Leveraging Text Information for Blind Face Restoration With Extreme Degradation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {2535-2545}
}