DR2: Diffusion-Based Robust Degradation Remover for Blind Face Restoration
Abstract
Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training, while more complex degradation often arises in the real world. This gap between the assumed and actual degradation hurts restoration performance, and artifacts are often observed in the output. However, it is expensive and infeasible to include every type of real-world degradation in the training data. To tackle this robustness issue, we propose the Diffusion-based Robust Degradation Remover (DR2), which first transforms the degraded image into a coarse but degradation-invariant prediction and then employs an enhancement module to restore the coarse prediction to a high-quality image. By leveraging a well-performing denoising diffusion probabilistic model, DR2 diffuses input images to a noisy state in which various types of degradation give way to Gaussian noise, and then captures semantic information through iterative denoising steps. As a result, DR2 is robust against common degradation (e.g., blur, resizing, noise, and compression) and compatible with different designs of enhancement modules. Experiments in various settings show that our framework outperforms state-of-the-art methods on heavily degraded synthetic and real-world datasets.
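The abstract describes a two-stage pipeline: diffuse the degraded input until its degradation is dominated by Gaussian noise, denoise it back to a coarse, degradation-invariant prediction, and pass that prediction to an enhancement module. The sketch below illustrates this idea in PyTorch under stated assumptions only; the names (dr2_restore, denoiser, enhancer, alphas_cumprod, tau) are hypothetical and do not come from the authors' released code, and the reverse step uses the standard DDPM posterior mean with the stochastic term omitted for brevity.

import torch

def dr2_restore(degraded, denoiser, enhancer, alphas_cumprod, tau):
    # Stage 1: forward diffusion q(x_tau | x_0): add Gaussian noise so that
    # the original degradation (blur, compression, ...) is drowned out.
    a_bar = alphas_cumprod[tau]
    x_t = a_bar.sqrt() * degraded + (1.0 - a_bar).sqrt() * torch.randn_like(degraded)

    # Stage 2: iterative reverse denoising from tau back to 0 with a
    # pretrained DDPM noise predictor (hypothetical interface); this yields
    # a coarse but degradation-invariant prediction of the clean face.
    for t in range(tau, 0, -1):
        a_bar_t = alphas_cumprod[t]
        alpha_t = a_bar_t / alphas_cumprod[t - 1]
        eps = denoiser(x_t, torch.full((degraded.shape[0],), t))  # predicted noise
        # DDPM posterior mean; the added noise term is dropped for brevity.
        x_t = (x_t - (1.0 - alpha_t) / (1.0 - a_bar_t).sqrt() * eps) / alpha_t.sqrt()

    # Stage 3: a separate enhancement module restores high-quality details.
    return enhancer(x_t)

Intuitively, the starting timestep tau trades robustness against fidelity: diffusing further removes heavier degradation but also discards more of the input's identity information before the enhancement module sees it.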
Related Material
[pdf] [supp] [arXiv] [bibtex]
@InProceedings{Wang_2023_CVPR,
    author    = {Wang, Zhixin and Zhang, Ziying and Zhang, Xiaoyun and Zheng, Huangjie and Zhou, Mingyuan and Zhang, Ya and Wang, Yanfeng},
    title     = {DR2: Diffusion-Based Robust Degradation Remover for Blind Face Restoration},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {1704-1713}
}