[pdf]
[arXiv]
[bibtex]
@InProceedings{Tsigos_2025_WACV,
  author    = {Tsigos, Konstantinos and Apostolidis, Evlampios and Mezaris, Vasileios},
  title     = {Improving the Perturbation-Based Explanation of Deepfake Detectors Through the Use of Adversarially-Generated Samples},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV) Workshops},
  month     = {February},
  year      = {2025},
  pages     = {706-715}
}
Improving the Perturbation-Based Explanation of Deepfake Detectors Through the Use of Adversarially-Generated Samples
Abstract
In this paper we introduce the idea of using adversarially-generated samples of input images that were classified as deepfakes by a detector, in order to form perturbation masks for inferring the importance of different input features and produce visual explanations. We generate these samples based on Natural Evolution Strategies, aiming to flip the original deepfake detector's decision so that these samples are classified as real. We apply this idea to four perturbation-based explanation methods (LIME, SHAP, SOBOL and RISE) and evaluate the performance of the resulting modified methods using a state-of-the-art deepfake detection model, a benchmarking dataset (FaceForensics++), and a corresponding explanation evaluation framework. Our quantitative assessments document the mostly positive contribution of the proposed perturbation approach to the performance of the explanation methods. Our qualitative analysis shows the capacity of the modified explanation methods to demarcate the manipulated image regions more accurately, and thus to provide more useful explanations.
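The core mechanism described above (black-box, gradient-free perturbation of an input so that the detector's decision flips from "deepfake" to "real") can be sketched with a Natural Evolution Strategies loop. This is a minimal illustrative sketch, not the authors' implementation: the `detector` callable, the hyperparameters, and the signed-step update are all assumptions made here for the example.

```python
import numpy as np

def nes_adversarial(image, detector, sigma=0.1, lr=0.02, pop=50, steps=100):
    """Illustrative NES attack: estimate the gradient of the detector's
    'deepfake' score from randomly sampled perturbations, then take signed
    descent steps to lower the score until the sample reads as 'real'.

    image    -- array of pixel values in [0, 1]
    detector -- callable returning the probability that the input is a deepfake
    """
    x = image.astype(np.float64).copy()
    for _ in range(steps):
        # population of Gaussian perturbation directions
        eps = np.random.randn(pop, *x.shape)
        scores = np.array([detector(np.clip(x + sigma * e, 0.0, 1.0))
                           for e in eps])
        # variance-reduced NES gradient estimate (subtract the mean score)
        grad = np.tensordot(scores - scores.mean(), eps, axes=(0, 0))
        grad /= pop * sigma
        # signed step toward lower 'deepfake' score, kept in valid pixel range
        x = np.clip(x - lr * np.sign(grad), 0.0, 1.0)
        if detector(x) < 0.5:   # decision flipped: classified as 'real'
            break
    return x
```

As a toy sanity check, with a stand-in detector that scores an image by its mean intensity (`detector = lambda img: float(img.mean())`) and a bright 8x8 "fake" input, the loop drives the score below the 0.5 decision threshold within the step budget. The resulting adversarial sample is what the paper then repurposes as a perturbation mask for the explanation methods.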
Related Material