DiffSeg: Towards Detecting Diffusion-Based Inpainting Attacks Using Multi-Feature Segmentation

Raphael Antonius Frick, Martin Steinebach; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 3802-3808

Abstract


With the advancements made in deep learning over the past years, creating convincing media manipulations has become easier and more accessible than ever before. In particular, diffusion models such as Stable Diffusion allow users to synthesize realistic images from a given text input. Apart from synthesizing entirely new images, diffusion models can also be used to edit existing images via inpainting. To combat the spread of disinformation and illegal content created with diffusion-based inpainting, this paper presents a new detection method based on multi-feature segmentation. In addition to information derived from the raw pixel values, noise and frequency information are exploited to detect and localize regions that have been subject to editing. Evaluation results strongly suggest that the proposed method achieves high mIoU and AUC scores, outperforming state-of-the-art methods even for syntheses generated by unseen diffusion models or highly compressed images.
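The abstract's core idea of combining pixel, noise, and frequency cues can be illustrated with a minimal sketch. Note this is not the paper's exact pipeline: the function names, the median-filter noise residual, and the log-magnitude FFT feature are illustrative assumptions standing in for whichever extractors the authors actually use; the resulting multi-channel tensor would then feed a segmentation network.

```python
# Hedged sketch (not the authors' implementation): stack pixel values,
# a simple noise residual, and a frequency feature into one input tensor.
import numpy as np
from scipy.ndimage import median_filter


def noise_residual(gray: np.ndarray, size: int = 3) -> np.ndarray:
    """Rough noise estimate: image minus its median-filtered version."""
    return gray - median_filter(gray, size=size)


def frequency_map(gray: np.ndarray) -> np.ndarray:
    """Log-magnitude of the centered 2D FFT as a simple frequency feature."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    return np.log1p(np.abs(spectrum))


def build_features(rgb: np.ndarray) -> np.ndarray:
    """Concatenate pixel (3), noise (1), and frequency (1) channels -> (H, W, 5)."""
    gray = rgb.mean(axis=-1)
    channels = [
        rgb.astype(np.float32),
        noise_residual(gray)[..., None].astype(np.float32),
        frequency_map(gray)[..., None].astype(np.float32),
    ]
    return np.concatenate(channels, axis=-1)


rgb = np.random.rand(64, 64, 3)
features = build_features(rgb)
print(features.shape)  # (64, 64, 5)
```

A segmentation model trained on such stacked features can pick up inpainting artifacts that are weak in the pixel domain but pronounced in the noise or frequency channels.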

Related Material


[bibtex]
@InProceedings{Frick_2024_CVPR,
    author    = {Frick, Raphael Antonius and Steinebach, Martin},
    title     = {DiffSeg: Towards Detecting Diffusion-Based Inpainting Attacks Using Multi-Feature Segmentation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {3802-3808}
}