A Comprehensive Framework for Evaluating Deepfake Generators: Dataset, Metrics Performance, and Comparative Analysis

Sahar Husseini, Jean-Luc Dugelay; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 372-381

Abstract


Assessing the realism and accuracy of deepfake generators, especially in cross-reenactment scenarios, is a major challenge, primarily because of the absence of ground-truth data, which rules out metrics that rely on an explicit reference, such as SSIM and LPIPS. To overcome this challenge, this paper introduces a novel protocol for quantitatively assessing images generated by face-reenactment techniques. To address the scarcity of suitable datasets, two video datasets are generated: the Real Head dataset and the synthesized Metahuman dataset. Furthermore, user studies are conducted to evaluate the efficacy of our proposed protocol. The results demonstrate a strong correlation between the subjective evaluations and the quantitative metrics obtained within our protocol. Comparative analysis with existing evaluation protocols further validates the effectiveness of our approach; notably, it performs best at analyzing identity preservation, head pose, and facial expression replication. The source code and datasets are publicly available at https://github.com/SaharHusseini/deepfake_evaluation.git.
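
For context on the full-reference metrics the abstract refers to, the sketch below shows how SSIM and LPIPS are typically computed when a ground-truth frame is available (e.g., in self-reenactment), using the scikit-image and lpips packages. It is an illustrative assumption about standard usage of those libraries, not code from the authors' repository; the function name and tensor-conversion details are hypothetical.

```python
import numpy as np
import torch
import lpips
from skimage.metrics import structural_similarity


def full_reference_scores(generated: np.ndarray, reference: np.ndarray,
                          loss_fn: lpips.LPIPS) -> tuple[float, float]:
    """Score one generated frame against its ground-truth reference.

    Both inputs are H x W x 3 uint8 RGB frames of the same size.
    """
    # SSIM over the color image (data_range = 255 for uint8 input).
    ssim_score = structural_similarity(
        reference, generated, channel_axis=2, data_range=255)

    # LPIPS expects N x C x H x W tensors scaled to [-1, 1].
    def to_tensor(img: np.ndarray) -> torch.Tensor:
        return torch.from_numpy(img).permute(2, 0, 1)[None].float() / 127.5 - 1.0

    with torch.no_grad():
        lpips_score = loss_fn(to_tensor(reference), to_tensor(generated)).item()

    return ssim_score, lpips_score


# Usage when a reference frame exists (self-reenactment):
#   loss_fn = lpips.LPIPS(net='alex')
#   ssim_val, lpips_val = full_reference_scores(generated_frame, reference_frame, loss_fn)
```

In cross-reenactment no such reference frame exists, which is exactly why these metrics cannot be applied directly and why the paper's protocol is needed.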

Related Material


[pdf]
[bibtex]
@InProceedings{Husseini_2023_ICCV,
    author    = {Husseini, Sahar and Dugelay, Jean-Luc},
    title     = {A Comprehensive Framework for Evaluating Deepfake Generators: Dataset, Metrics Performance, and Comparative Analysis},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2023},
    pages     = {372-381}
}