Deep Image Comparator: Learning To Visualize Editorial Change

Alexander Black, Tu Bui, Hailin Jin, Vishy Swaminathan, John Collomosse; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 972-980

Abstract


We present a novel architecture for comparing a pair of images to identify image regions that have been subjected to editorial manipulation. We first describe a robust near-duplicate search for matching a potentially manipulated image circulating online to an image within a trusted database of originals. We then describe a novel architecture for comparing that image pair, to localize regions that have been manipulated to differ from the retrieved original. The localization ignores discrepancies due to benign image transformations that commonly occur during online redistribution. These include artifacts due to noise and recompression degradation, as well as out-of-place transformations due to image padding, warping, and changes in size and shape. Robustness towards out-of-place transformations is achieved via the end-to-end training of a differentiable warping module within the comparator architecture. We demonstrate effective retrieval and comparison of benignly transformed and manipulated images over a dataset of millions of photographs.
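To make the two-stage pipeline described above concrete, the sketch below outlines (1) near-duplicate retrieval of a trusted original by embedding similarity, and (2) a comparator that geometrically aligns the query to the retrieved original via a differentiable (spatial-transformer-style) warp before predicting a heatmap of editorial change. All module names, layer sizes, and the specific choice of an affine warp are illustrative assumptions, not the authors' released implementation.

    # Minimal, hypothetical sketch of retrieval + comparison (PyTorch).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def retrieve_original(query_emb, db_embs):
        """Index of the most similar trusted original by cosine similarity."""
        sims = F.normalize(query_emb, dim=-1) @ F.normalize(db_embs, dim=-1).T
        return sims.argmax(dim=-1)

    class Comparator(nn.Module):
        def __init__(self):
            super().__init__()
            # Shared feature extractor applied to both images (sizes assumed).
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
            # Regress an affine warp from the concatenated feature pair.
            self.loc = nn.Sequential(
                nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                nn.Linear(2 * 64 * 8 * 8, 6))
            # Per-pixel change logits from the aligned feature pair.
            self.head = nn.Conv2d(2 * 64, 1, 1)

        def forward(self, query, original):
            fq, fo = self.encoder(query), self.encoder(original)
            theta = self.loc(torch.cat([fq, fo], dim=1)).view(-1, 2, 3)
            grid = F.affine_grid(theta, fq.size(), align_corners=False)
            # Differentiable warp: aligns the query features to the original,
            # so padding/resizing discrepancies are absorbed before differencing.
            fq_aligned = F.grid_sample(fq, grid, align_corners=False)
            return self.head(torch.cat([fq_aligned, fo], dim=1))

The key design point reflected here is that the warp is inside the network and differentiable, so it can be trained end-to-end with the change-localization head rather than relying on a separate, fixed registration step.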

Related Material


[bibtex]
@InProceedings{Black_2021_CVPR,
    author    = {Black, Alexander and Bui, Tu and Jin, Hailin and Swaminathan, Vishy and Collomosse, John},
    title     = {Deep Image Comparator: Learning To Visualize Editorial Change},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {972-980}
}