Interpretable-Through-Prototypes Deepfake Detection for Diffusion Models

Agil Aghasanli, Dmitry Kangin, Plamen Angelov; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 467-474

Abstract


Deepfake detection is the process of recognizing and distinguishing real content from content generated by deep learning algorithms, often referred to as deepfakes. To counter the rising threat of deepfakes and maintain the integrity of digital media, research is ongoing into more reliable and precise detection techniques. In recent years, deep learning models such as Stable Diffusion have become able to generate more detailed and less blurry images. In this paper, we develop a deepfake detection technique to distinguish original images from fake images generated by various Diffusion Models. The developed methodology takes advantage of features from fine-tuned Vision Transformers (ViTs), combined with existing classifiers such as Support Vector Machines (SVMs). We demonstrate the proposed methodology's ability to provide interpretability-through-prototypes by analysing the support vectors of the SVMs. Additionally, due to the novelty of the topic, there is a lack of open datasets for deepfake detection. To evaluate the methodology, we have therefore created custom datasets by applying various generative techniques of Diffusion Models to open datasets (ImageNet, FFHQ, Oxford-IIIT Pet). The code is available at https://github.com/lira-centre/
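The pipeline the abstract describes (ViT features fed to an SVM, with support vectors serving as interpretable prototypes) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the random feature matrices below are hypothetical stand-ins for embeddings extracted by a fine-tuned ViT, and the linear kernel is an assumption.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical stand-in for fine-tuned ViT embeddings: in the paper's pipeline,
# each row would be the feature vector of one image, with labels marking
# real (0) versus diffusion-generated (1) images.
rng = np.random.default_rng(0)
real_feats = rng.normal(loc=-1.0, scale=1.0, size=(50, 16))
fake_feats = rng.normal(loc=+1.0, scale=1.0, size=(50, 16))
X = np.vstack([real_feats, fake_feats])
y = np.array([0] * 50 + [1] * 50)

# SVM on the extracted features (the linear kernel here is an assumption).
clf = SVC(kernel="linear")
clf.fit(X, y)

# Interpretability-through-prototypes: the support vectors are the training
# samples that define the decision boundary, so the corresponding images can
# be shown to a user as prototype examples explaining the classifier's output.
prototype_indices = clf.support_       # indices into X of the prototype samples
prototypes_per_class = clf.n_support_  # number of prototypes for each class
prototypes = X[prototype_indices]      # the prototype feature vectors themselves
```

In practice one would map `prototype_indices` back to the original images and display them alongside a query image as the model's supporting evidence.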

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Aghasanli_2023_ICCV,
    author    = {Aghasanli, Agil and Kangin, Dmitry and Angelov, Plamen},
    title     = {Interpretable-Through-Prototypes Deepfake Detection for Diffusion Models},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2023},
    pages     = {467-474}
}