Towards Unsupervised Blind Face Restoration using Diffusion Prior

Tianshu Kuai, Sina Honari, Igor Gilitschenski, Alex Levinshtein; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 1839-1849

Abstract


Blind face restoration methods have shown remarkable performance, particularly when trained on large-scale synthetic datasets with supervised learning. These datasets are often generated by simulating low-quality face images with a handcrafted image degradation pipeline. The models trained on such synthetic degradations, however, cannot deal with inputs of unseen degradations. In this paper, we address this issue by using only a set of input images, with unknown degradations and without ground truth targets, to fine-tune a restoration model that learns to map them to clean and contextually consistent outputs. We utilize a pre-trained diffusion model as a generative prior through which we generate high quality images from the natural image distribution while maintaining the input image content through consistency constraints. These generated images are then used as pseudo targets to fine-tune a pre-trained restoration model. Unlike many recent approaches that employ diffusion models at test time, we only do so during training and thus maintain efficient inference-time performance. Extensive experiments show that the proposed approach can consistently improve the perceptual quality of pre-trained blind face restoration models while maintaining great consistency with the input contents. Our best model also achieves state-of-the-art results on both synthetic and real-world datasets.
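The training recipe described above can be sketched at a very high level: sample pseudo targets from a generative prior under a content-consistency constraint, then fine-tune the restoration model on (degraded input, pseudo target) pairs, so the prior is never needed at inference time. The toy NumPy code below is only an illustration of this control flow under simplified assumptions; the function names, the scalar-weight "restoration model", and the iterative stand-in for diffusion sampling are all hypothetical and not the authors' actual architecture or losses.

```python
import numpy as np

def restoration_model(x, w):
    """Placeholder 'restoration model': a single learned scaling (assumption)."""
    return w * x

def diffusion_prior(x, steps=10, consistency_weight=0.5):
    """Stand-in for sampling from a diffusion prior with a consistency
    constraint: iteratively move toward a 'clean' image (all ones here)
    while pulling back toward the input x to preserve its content."""
    y = x.copy()
    clean_target = np.ones_like(x)
    for _ in range(steps):
        y = y + 0.1 * (clean_target - y)            # generative prior step
        y = y + consistency_weight * 0.1 * (x - y)  # content consistency step
    return y

def finetune_on_pseudo_targets(inputs, w=1.0, lr=0.05, epochs=50):
    """Generate pseudo targets once with the prior, then fine-tune the
    restoration model on (input, pseudo-target) pairs with an MSE loss.
    The prior is not used after training, so inference stays cheap."""
    pseudo_targets = [diffusion_prior(x) for x in inputs]
    for _ in range(epochs):
        for x, y in zip(inputs, pseudo_targets):
            pred = restoration_model(x, w)
            grad = np.mean(2.0 * (pred - y) * x)  # d/dw of mean((w*x - y)^2)
            w -= lr * grad
    return w

# Degraded inputs with unknown degradations (random toy data).
rng = np.random.default_rng(0)
inputs = [rng.uniform(0.2, 0.8, size=(8, 8)) for _ in range(4)]
w_finetuned = finetune_on_pseudo_targets(inputs)
```

After fine-tuning, only `restoration_model` is evaluated on new inputs; the (expensive) `diffusion_prior` call appears solely in the pseudo-target generation step, mirroring the paper's train-time-only use of the diffusion model.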

Related Material


[bibtex]
@InProceedings{Kuai_2025_WACV,
    author    = {Kuai, Tianshu and Honari, Sina and Gilitschenski, Igor and Levinshtein, Alex},
    title     = {Towards Unsupervised Blind Face Restoration using Diffusion Prior},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {1839-1849}
}