Rethinking Vision-Language Model in Face Forensics: Multi-Modal Interpretable Forged Face Detector

Xiao Guo, Xiufeng Song, Yue Zhang, Xiaohong Liu, Xiaoming Liu; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 105-116

Abstract


Deepfake detection is a long-established research topic vital for mitigating the spread of malicious misinformation. Unlike prior methods, which provide either binary classification results or textual explanations but not both, we introduce a novel method capable of generating both simultaneously. Our method harnesses the multi-modal learning capability of the pre-trained CLIP and the unprecedented interpretability of large language models (LLMs) to enhance both the generalization and explainability of deepfake detection. Specifically, we present a multi-modal face forgery detector (M2F2-Det) that employs tailored face forgery prompt learning, incorporating the pre-trained CLIP to improve generalization to unseen forgeries. M2F2-Det also incorporates an LLM to provide detailed textual explanations of its detection decisions, enhancing interpretability by bridging the gap between natural language and subtle cues of facial forgeries. Empirically, we evaluate M2F2-Det on both detection and explanation generation tasks, where it achieves state-of-the-art performance, demonstrating its effectiveness in identifying and explaining diverse forgeries. Source code and models are available at this link.
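To make the prompt-learning idea concrete, below is a minimal, illustrative sketch of CLIP-style prompt learning for binary real/fake classification. It is not the authors' M2F2-Det implementation: random linear projections stand in for CLIP's frozen image and text encoders, and all class names, dimensions, and module names are assumptions chosen for brevity. Only the learnable prompt (context) tokens are trained, which is the core of the prompt-learning recipe the abstract describes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptForgeryDetector(nn.Module):
    """Toy sketch of prompt learning for real/fake classification.

    Frozen random projections stand in for CLIP's pre-trained image and
    text encoders (assumption for a self-contained example); the learnable
    context tokens `ctx` are the only trainable prompt parameters.
    """

    def __init__(self, feat_dim: int = 64, embed_dim: int = 32, n_ctx: int = 4):
        super().__init__()
        # Frozen stand-in for CLIP's image encoder.
        self.image_encoder = nn.Linear(feat_dim, embed_dim)
        for p in self.image_encoder.parameters():
            p.requires_grad = False
        # One set of learnable context (prompt) tokens per class: real, fake.
        self.ctx = nn.Parameter(torch.randn(2, n_ctx, embed_dim) * 0.02)
        # Frozen stand-in for CLIP's text encoder.
        self.text_proj = nn.Linear(embed_dim, embed_dim)
        for p in self.text_proj.parameters():
            p.requires_grad = False
        # Temperature, as in CLIP's contrastive formulation.
        self.logit_scale = nn.Parameter(torch.tensor(1.0))

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # Normalized image embeddings: (B, D).
        img = F.normalize(self.image_encoder(images), dim=-1)
        # Pool each class's prompt tokens, encode, normalize: (2, D).
        txt = F.normalize(self.text_proj(self.ctx.mean(dim=1)), dim=-1)
        # Cosine-similarity logits over {real, fake}: (B, 2).
        return self.logit_scale.exp() * img @ txt.t()

detector = PromptForgeryDetector()
logits = detector(torch.randn(8, 64))
probs = logits.softmax(dim=-1)  # per-image probabilities over {real, fake}
```

In training, one would freeze the CLIP backbones, optimize only `ctx` (and optionally `logit_scale`) with cross-entropy against real/fake labels, which is what lets the pre-trained representation generalize to unseen forgeries.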

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Guo_2025_CVPR,
    author    = {Guo, Xiao and Song, Xiufeng and Zhang, Yue and Liu, Xiaohong and Liu, Xiaoming},
    title     = {Rethinking Vision-Language Model in Face Forensics: Multi-Modal Interpretable Forged Face Detector},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {105-116}
}