Few-Shot Hash Learning for Image Retrieval

Yu-Xiong Wang, Liangke Gui, Martial Hebert; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 1228-1237

Abstract


Current approaches to hash-based semantic image retrieval assume a set of pre-defined categories and rely on supervised learning from a large number of annotated samples. This need for labeled samples limits their applicability in scenarios where a user provides, at query time, a small set of training images defining a customized novel category. This paper addresses the problem of few-shot hash learning, in the spirit of one-shot learning in image recognition and classification and of early work on locality-sensitive hashing. More precisely, our approach is based on the insight that universal hash functions can be learned offline from unlabeled data, owing to the information implicit in the density structure of a discriminative feature space. A task-specific combination of hash codes for a novel category can then be selected from only a few labeled samples. The resulting unsupervised generic hashing (UGH) significantly outperforms current supervised and unsupervised hashing approaches.
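The two-stage idea in the abstract — build a pool of generic hash functions offline, then pick a task-specific subset from a few labeled samples — can be illustrated with a minimal sketch. Note the assumptions: the pool here uses random hyperplanes (an LSH-style stand-in, not the paper's density-based learning), and bits are selected by a simple agreement score over the few positive samples; all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_hash_pool(dim, pool_size):
    # Offline stage (stand-in): a pool of candidate hash bits given by
    # random hyperplanes in feature space. The paper instead learns this
    # pool from the density structure of unlabeled data.
    return rng.standard_normal((pool_size, dim))

def hash_bits(X, W):
    # Map (n, dim) features to (n, pool_size) binary codes.
    return (X @ W.T > 0).astype(np.uint8)

def select_bits(pos_feats, W, n_bits):
    # Few-shot stage: keep the bits on which the handful of labeled
    # positives agree most (per-bit mean near 0 or 1 = high agreement).
    B = hash_bits(pos_feats, W)
    consistency = np.abs(B.mean(axis=0) - 0.5)
    return np.argsort(-consistency)[:n_bits]

def retrieve(query, database, W, bits, topk=5):
    # Rank the database by Hamming distance on the selected bits only.
    qb = hash_bits(query[None, :], W)[:, bits]
    db = hash_bits(database, W)[:, bits]
    dist = (qb != db).sum(axis=1)
    return np.argsort(dist)[:topk]
```

The key design point this sketch preserves is that no supervised training happens at query time: the expensive pool construction is done once offline, and adapting to a novel category is just a cheap per-bit scoring over the few provided samples.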

Related Material


[bibtex]
@InProceedings{Wang_2017_ICCV,
  author    = {Wang, Yu-Xiong and Gui, Liangke and Hebert, Martial},
  title     = {Few-Shot Hash Learning for Image Retrieval},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
  month     = {Oct},
  year      = {2017},
  pages     = {1228-1237}
}