DeCLIP: Decoding CLIP Representations for Deepfake Localization

Stefan Smeu, Elisabeta Oneata, Dan Oneata; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 149-159

Abstract


Generative models can create entirely new images, but they can also partially modify real images in ways that are undetectable to the human eye. In this paper, we address the challenge of automatically detecting such local manipulations. One of the most pressing problems in deepfake detection remains the ability of models to generalize to different classes of generators. In the case of fully manipulated images, representations extracted from large self-supervised models (such as CLIP) provide a promising direction towards more robust detectors. Here, we introduce DeCLIP -- a first attempt to leverage such large pretrained features for detecting local manipulations. We show that, when combined with a reasonably large convolutional decoder, pretrained self-supervised representations are able to perform localization and improve generalization capabilities over existing methods. Unlike previous work, our approach is able to perform localization in the challenging case of latent diffusion models, where the entire image is affected by the fingerprint of the generator. Moreover, we observe that this type of data, which combines local semantic information with a global fingerprint, provides more stable generalization than other categories of generative methods.
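The pipeline the abstract describes (frozen self-supervised patch features decoded into a per-pixel manipulation mask) can be sketched roughly as follows. This is a toy illustration, not the authors' implementation: random arrays stand in for the CLIP backbone's patch features, the convolutional decoder is reduced to a learned per-patch projection plus nearest-neighbour upsampling, and the grid size, feature dimension, and output resolution are assumed values.

```python
import numpy as np

def decode_patch_features(feats, W, b, out_size=224):
    """Toy decoder: project each patch feature to a scalar logit,
    then upsample the patch grid to a full-resolution soft mask.

    feats: (h, w, d) patch features from a frozen backbone (stand-in for CLIP).
    W:     (d,) projection weights; b: scalar bias (stand-ins for a conv decoder).
    Returns an (out_size, out_size) manipulation mask with values in (0, 1).
    """
    h, w, d = feats.shape
    logits = feats @ W + b                       # (h, w) per-patch logits
    scale = out_size // h                        # nearest-neighbour upsampling factor
    upsampled = np.kron(logits, np.ones((scale, scale)))
    return 1.0 / (1.0 + np.exp(-upsampled))      # sigmoid -> soft mask

# Assumed setup: a 224x224 image yields a 16x16 grid of 1024-d patch features.
rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 16, 1024))
W = rng.standard_normal(1024) * 0.01
mask = decode_patch_features(feats, W, b=0.0)
print(mask.shape)  # (224, 224)
```

In the paper, the decoder is a trained convolutional network rather than a single projection, but the overall shape of the computation (dense patch features in, upsampled per-pixel mask out) is the same.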

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Smeu_2025_WACV,
  author    = {Smeu, Stefan and Oneata, Elisabeta and Oneata, Dan},
  title     = {DeCLIP: Decoding CLIP Representations for Deepfake Localization},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {149-159}
}