Weakly-Supervised Deepfake Localization in Diffusion-Generated Images

Dragoș-Constantin Țânțaru, Elisabeta Oneață, Dan Oneață; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 6258-6268

Abstract


The remarkable generative capabilities of denoising diffusion models have raised new concerns regarding the authenticity of the images we see every day on the Internet. However, the vast majority of existing deepfake detection models are tested against earlier generative approaches (e.g., GANs) and usually provide only a "fake" or "real" label per image. We believe a more informative output would be to augment the per-image label with a localization map indicating which regions of the input have been manipulated. To this end, we frame this task as a weakly-supervised localization problem and identify three main categories of methods (based on either explanations, local scores, or attention), which we compare on an equal footing by using the Xception network as the common backbone architecture. We provide a careful analysis of all the main factors that parameterize the design space: choice of method, type of supervision, dataset, and generator used in the creation of manipulated images; our study is enabled by constructing datasets in which only one of these components is varied at a time. Our results show that weakly-supervised localization is attainable, with the best-performing detection method (based on local scores) being less sensitive to the looser supervision than to mismatches in dataset or generator.
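
To make the explanation-based family of methods concrete, the sketch below shows how a weakly-supervised localization map can be obtained from an image-level real/fake classifier via Grad-CAM: the activations of the last convolutional block are weighted by the spatially pooled gradients of the "fake" logit and upsampled to the input resolution. This is an illustrative sketch only, not the authors' code; the TinyDetector stand-in and the gradcam_localization helper are hypothetical, and in the paper the common backbone is the Xception network.

# Minimal sketch (not the authors' code) of one of the three method families
# compared in the paper: an explanation-based localization map produced by
# Grad-CAM on a binary real/fake classifier trained with image-level labels.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyDetector(nn.Module):
    """Hypothetical stand-in backbone; the paper uses Xception instead."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(64, 1)  # single "fake" logit

    def forward(self, x):
        feats = self.features(x)                        # B x 64 x h x w
        logit = self.classifier(feats.mean(dim=(2, 3))) # global average pool
        return logit, feats


def gradcam_localization(model, image):
    """Return a heatmap over the input highlighting regions that drive the
    'fake' decision; only image-level (weak) supervision is required."""
    model.eval()
    logit, feats = model(image)
    # Gradients of the fake logit w.r.t. the last convolutional feature map.
    grads = torch.autograd.grad(logit.sum(), feats)[0]         # B x C x h x w
    weights = grads.mean(dim=(2, 3), keepdim=True)             # pooled gradients
    cam = F.relu((weights * feats).sum(dim=1, keepdim=True))   # B x 1 x h x w
    cam = F.interpolate(cam, size=image.shape[-2:],
                        mode="bilinear", align_corners=False)  # upsample to input size
    cam = cam / (cam.amax(dim=(2, 3), keepdim=True) + 1e-8)    # normalize to [0, 1]
    return cam.squeeze(1)                                      # B x H x W


if __name__ == "__main__":
    model = TinyDetector()
    fake_image = torch.rand(1, 3, 128, 128)  # placeholder input
    heatmap = gradcam_localization(model, fake_image)
    print(heatmap.shape)  # torch.Size([1, 128, 128])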

Related Material


BibTeX:
@InProceedings{Tantaru_2024_WACV,
    author    = {Țânțaru, Dragoș-Constantin and Oneață, Elisabeta and Oneață, Dan},
    title     = {Weakly-Supervised Deepfake Localization in Diffusion-Generated Images},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {6258-6268}
}