HashGAN: Deep Learning to Hash With Pair Conditional Wasserstein GAN

Yue Cao, Bin Liu, Mingsheng Long, Jianmin Wang; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 1287-1296

Abstract


Deep learning to hash improves image retrieval performance by end-to-end representation learning and hash coding from training data with pairwise similarity information. Because similarity information is often scarce and expensive to collect in many application domains, existing deep learning to hash methods may overfit the training data, resulting in a substantial loss of retrieval quality. This paper presents HashGAN, a novel architecture for deep learning to hash, which learns compact binary hash codes from both real images and diverse images synthesized by generative models. The main idea is to augment the training data with nearly real images synthesized from a new Pair Conditional Wasserstein GAN (PC-WGAN) conditioned on the pairwise similarity information. Extensive experiments demonstrate that HashGAN can generate high-quality binary hash codes and yield state-of-the-art image retrieval performance on three benchmarks: NUS-WIDE, CIFAR-10, and MS-COCO.
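To make the "pairwise similarity information" concrete: deep hashing methods in this family typically train on pairs of images labeled similar (s_ij = 1) or dissimilar (s_ij = 0), penalizing hash codes whose inner product disagrees with the label. The sketch below implements the common pairwise logistic loss, log(1 + exp(⟨h_i, h_j⟩)) − s_ij·⟨h_i, h_j⟩, on toy relaxed (continuous) codes; it is an illustrative formulation from the deep-hashing literature, not the paper's exact objective, and all names (`pairwise_hash_loss`, `codes`, `sim`) are our own.

```python
import math

def pairwise_hash_loss(codes, sim):
    """Average pairwise logistic loss over labeled pairs.

    codes: dict mapping image id -> relaxed hash code (list of floats)
    sim:   dict mapping (i, j) pair  -> similarity label s_ij in {0, 1}

    Illustrative sketch of a pairwise-similarity objective; not the
    paper's exact loss.
    """
    total = 0.0
    for (i, j), s in sim.items():
        # inner product of the two (relaxed) hash codes
        ip = sum(a * b for a, b in zip(codes[i], codes[j]))
        # log(1 + exp(ip)) - s * ip: small when the inner product
        # agrees with the similarity label, large when it disagrees
        total += math.log1p(math.exp(ip)) - s * ip
    return total / len(sim)

# Toy example: images 0 and 1 share a code, image 2 has the opposite code.
codes = {0: [1.0, 1.0, -1.0], 1: [1.0, 1.0, -1.0], 2: [-1.0, -1.0, 1.0]}
consistent = pairwise_hash_loss(codes, {(0, 1): 1, (0, 2): 0})
inconsistent = pairwise_hash_loss(codes, {(0, 1): 0, (0, 2): 1})
```

Here `consistent` (labels agree with the codes) is far smaller than `inconsistent` (labels contradict the codes), which is exactly the gradient signal that drives the hash network; the paper's contribution is supplying extra pairs for this kind of loss via PC-WGAN-synthesized images when real labeled pairs are scarce.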

Related Material


[pdf]
[bibtex]
@InProceedings{Cao_2018_CVPR,
author = {Cao, Yue and Liu, Bin and Long, Mingsheng and Wang, Jianmin},
title = {HashGAN: Deep Learning to Hash With Pair Conditional Wasserstein GAN},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}