Semi-Supervised Generative Adversarial Hashing for Image Retrieval
Guan'an Wang, Qinghao Hu, Jian Cheng, Zengguang Hou; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 469-485
Abstract
With the explosive growth of image and video data on the Internet, hashing techniques have been extensively studied for large-scale visual search. Benefiting from advances in deep learning, deep hashing methods have achieved promising performance. However, these models are usually trained with supervised information such as class labels, which is scarce and expensive to obtain in practice. In this paper, inspired by generative models and the minimax two-player game, we propose a novel semi-supervised generative adversarial hashing (SSGAH) approach. First, we unify a generative model, a discriminative model, and a deep hashing model in a single framework that exploits both triplet-wise information and unlabeled data. Second, we design novel structures for the generative and discriminative models so that they learn the distribution of triplet-wise information in a semi-supervised way. In addition, we propose a semi-supervised ranking loss and an adversarial ranking loss to learn binary codes that preserve semantic similarity for both labeled and unlabeled data. Finally, by optimizing the whole model in an adversarial training manner, the learned binary codes capture richer semantic information from all data. Extensive empirical evaluations on two widely used benchmark datasets show that our proposed approach significantly outperforms state-of-the-art hashing methods.
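As background on the triplet-wise ranking objectives mentioned above, the following is a minimal PyTorch sketch of a generic margin-based triplet ranking loss on relaxed (real-valued) hash codes. It is our own illustration of the standard formulation, not the paper's semi-supervised or adversarial ranking losses, and the names (triplet_ranking_loss, h_query, h_pos, h_neg, margin) are hypothetical.

import torch
import torch.nn.functional as F

def triplet_ranking_loss(h_query, h_pos, h_neg, margin=2.0):
    # Margin-based triplet ranking loss on relaxed (real-valued) hash codes.
    # h_query, h_pos, h_neg: (batch, bits) tensors; the positive shares the
    # query's label, the negative does not. The loss pushes the query code
    # closer to the positive than to the negative by at least `margin`.
    dist_pos = (h_query - h_pos).pow(2).sum(dim=1)
    dist_neg = (h_query - h_neg).pow(2).sum(dim=1)
    return F.relu(dist_pos - dist_neg + margin).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    # 48-bit relaxed codes, as a tanh-activated hashing network might output.
    h_q = torch.tanh(torch.randn(8, 48))
    h_p = torch.tanh(torch.randn(8, 48))
    h_n = torch.tanh(torch.randn(8, 48))
    print(triplet_ranking_loss(h_q, h_p, h_n))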
Related Material
[pdf]
[bibtex]
@InProceedings{Wang_2018_ECCV,
author = {Wang, Guan'an and Hu, Qinghao and Cheng, Jian and Hou, Zengguang},
title = {Semi-Supervised Generative Adversarial Hashing for Image Retrieval},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}