SISL: Self-Supervised Image Signature Learning for Splicing Detection & Localization

Susmit Agrawal, Prabhat Kumar, Siddharth Seth, Toufiq Parag, Maneesh Singh, R. Venkatesh Babu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022, pp. 22-32


Recent algorithms for image manipulation detection almost exclusively use deep network models. These approaches require dense pixel-wise ground-truth masks, camera IDs, or image metadata to train the networks. On one hand, constructing a training set that represents the countless tampering possibilities is impractical. On the other hand, social media platforms and commercial applications are often required to strip camera IDs and metadata from images. A self-supervised algorithm that trains manipulation detection models without dense ground truth or camera/image metadata would therefore be extremely useful for many forensics applications. In this paper, we propose a self-supervised approach for training splicing detection/localization models from the frequency transform of images. To identify spliced regions, our deep network learns a representation that captures an image-specific signature by enforcing (image) self-consistency. We experimentally demonstrate that the proposed model yields performance similar to or better than multiple existing methods on standard datasets, without relying on labels or metadata.
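The core idea, that spliced regions carry a frequency-domain signature inconsistent with the rest of the image, can be illustrated with a minimal hand-crafted sketch. Note this is only an assumption-laden toy, not the paper's method: the paper trains a deep network self-supervisedly, whereas the sketch below uses fixed patch-wise DCT log-magnitude features and flags patches whose signature disagrees with the image-wide average.

```python
import numpy as np
from scipy.fft import dctn  # 2-D discrete cosine transform


def patch_signatures(image, patch=32):
    """Split a grayscale image into non-overlapping patches and compute a
    log-magnitude DCT feature vector per patch (a crude 'signature')."""
    h, w = image.shape
    feats, coords = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = image[y:y + patch, x:x + patch]
            spec = np.log1p(np.abs(dctn(block, norm="ortho")))
            feats.append(spec.ravel())
            coords.append((y, x))
    return np.array(feats), coords


def consistency_map(feats):
    """Cosine similarity of each patch signature to the mean signature;
    low scores mark patches inconsistent with the rest of the image."""
    f = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    mean = f.mean(axis=0)
    mean /= np.linalg.norm(mean) + 1e-8
    return f @ mean


# Toy example: a smooth image with one patch "spliced" from a source
# with very different frequency statistics.
rng = np.random.default_rng(0)
img = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))
img += 0.01 * rng.standard_normal((128, 128))
img[32:64, 64:96] = rng.standard_normal((32, 32))  # hypothetical spliced region

feats, coords = patch_signatures(img, patch=32)
scores = consistency_map(feats)
print(coords[int(np.argmin(scores))])  # least self-consistent patch
```

In SISL the fixed DCT feature is replaced by a learned, image-specific representation, and self-consistency is enforced during training rather than measured post hoc; the sketch only conveys why frequency statistics can localize a splice without labels.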

Related Material

@InProceedings{Agrawal_2022_CVPR,
    author    = {Agrawal, Susmit and Kumar, Prabhat and Seth, Siddharth and Parag, Toufiq and Singh, Maneesh and Babu, R. Venkatesh},
    title     = {{SISL}: Self-Supervised Image Signature Learning for Splicing Detection \& Localization},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2022},
    pages     = {22-32}
}