IFQA: Interpretable Face Quality Assessment

Byungho Jo, Donghyeon Cho, In Kyu Park, Sungeun Hong; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 3444-3453

Abstract


Existing face restoration models have relied on general assessment metrics that do not consider the characteristics of facial regions. Recent works have therefore assessed their methods using human studies, which are not scalable and involve significant effort. This paper proposes a novel face-centric metric based on an adversarial framework in which a generator simulates face restoration and a discriminator assesses image quality. Specifically, our per-pixel discriminator enables interpretable evaluation that cannot be provided by traditional metrics. Moreover, our metric emphasizes primary facial regions, considering that even minor changes to the eyes, nose, and mouth significantly affect human cognition. Our face-oriented metric consistently surpasses existing general or facial image quality assessment metrics by impressive margins. We demonstrate the generalizability of the proposed strategy in various architectural designs and challenging scenarios. Interestingly, we find that IFQA can also lead to performance improvements when used as an objective function. The code and models are available at https://github.com/VCLLab/IFQA.
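The abstract describes two core ideas: a fully convolutional, per-pixel discriminator that produces an interpretable quality map, and an aggregation that weights primary facial regions (eyes, nose, mouth) more heavily. The sketch below illustrates that idea only; it is not the authors' implementation (see the GitHub repository above), and the names PerPixelDiscriminator, region_weighted_score, and the weight alpha are hypothetical choices for illustration.

```python
# Minimal sketch of a per-pixel quality discriminator with region weighting.
# Assumptions: a PatchGAN-style network, a binary mask of primary facial
# regions derived from landmarks, and an illustrative weight alpha.
import torch
import torch.nn as nn


class PerPixelDiscriminator(nn.Module):
    """Fully convolutional discriminator returning an HxW quality map in [0, 1]."""

    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        # (B, 1, H, W): one interpretable realness/quality score per pixel
        return self.net(x)


def region_weighted_score(score_map, region_mask, alpha=2.0):
    """Aggregate the per-pixel map into one score per image.

    region_mask: (B, 1, H, W) binary mask of eyes/nose/mouth pixels;
    alpha is a hypothetical emphasis factor for those regions.
    """
    weights = 1.0 + alpha * region_mask
    return (score_map * weights).sum(dim=(1, 2, 3)) / weights.sum(dim=(1, 2, 3))


# Usage: score restored face crops; (1 - score) could serve as an auxiliary
# loss term when training a restoration network, mirroring the abstract's
# remark about using the metric as an objective function.
D = PerPixelDiscriminator()
imgs = torch.rand(4, 3, 256, 256)      # restored face crops
mask = torch.zeros(4, 1, 256, 256)     # primary-region mask from landmarks
scores = region_weighted_score(D(imgs), mask)
```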

Related Material


BibTeX:
@InProceedings{Jo_2023_WACV,
  author    = {Jo, Byungho and Cho, Donghyeon and Park, In Kyu and Hong, Sungeun},
  title     = {IFQA: Interpretable Face Quality Assessment},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2023},
  pages     = {3444-3453}
}