Large Scale Near-Duplicate Image Retrieval via Patch Embedding

Shangpeng Yan, Xiaoyun Zhang, Li Chen, Wenbo Bao, Zhiyong Gao; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019

Abstract


Large-scale near-duplicate image retrieval (NDIR) relies on the Bag-of-Words methodology, which quantizes local features into visual words. However, directly matching these visual words typically produces many false matches due to quantization errors. To make the matching process more discriminative, existing methods usually exploit hand-crafted contextual information, which performs poorly in complicated real-world scenarios. In contrast, in this paper we propose a trainable lightweight embedding network to extract local binary features. The network takes image patches as input and generates binary codes that can be stored efficiently in the inverted index and used to discard mismatches immediately during retrieval. We improve the discriminability of the codes by carefully composing the training patches for network optimization, combining proper inter-class (non-duplicate) patch selection with rich intra-class (near-duplicate) patch generation. We evaluate our approach on the public NDIR dataset INRIA CopyDays, and the experimental results show that our method performs favorably against state-of-the-art algorithms. Furthermore, with a relatively short code length, our approach achieves higher query speed and lower storage cost.
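The retrieval mechanism described above (a short binary code per local feature, stored in the inverted index and compared by Hamming distance to discard mismatches) can be illustrated with a minimal Python/PyTorch sketch. The network architecture, the code length, and the Hamming threshold below are illustrative assumptions, not the paper's actual design.

import torch
import torch.nn as nn

class PatchEmbeddingNet(nn.Module):
    # Hypothetical lightweight embedder: maps an image patch to a short
    # real-valued vector that is binarized by sign at indexing time.
    def __init__(self, code_bits=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, code_bits)

    def forward(self, patch):                  # patch: (N, 3, H, W)
        x = self.features(patch).flatten(1)    # (N, 32)
        return torch.tanh(self.fc(x))          # (N, code_bits), values in (-1, 1)

def binarize(embedding):
    # Pack the sign pattern of one embedding vector into an int,
    # so the code fits compactly in an inverted-index entry.
    code = 0
    for b in (embedding > 0).int().tolist():
        code = (code << 1) | b
    return code

def is_match(query_code, db_code, threshold=4):
    # Two features quantized to the same visual word are kept as a
    # candidate match only if their binary codes are close in Hamming space.
    return bin(query_code ^ db_code).count("1") <= threshold

At query time, each local feature is quantized to a visual word as usual; the stored binary codes of the features indexed under that word are then compared against the query's code, and entries failing the Hamming check are discarded before any further verification.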

Related Material


[pdf]
[bibtex]
@InProceedings{Yan_2019_ICCV,
author = {Yan, Shangpeng and Zhang, Xiaoyun and Chen, Li and Bao, Wenbo and Gao, Zhiyong},
title = {Large Scale Near-Duplicate Image Retrieval via Patch Embedding},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}