HashNet: Deep Learning to Hash by Continuation
Zhangjie Cao, Mingsheng Long, Jianmin Wang, Philip S. Yu; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 5608-5617
Abstract
Learning to hash has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval, owing to its computational efficiency and retrieval quality. Deep learning to hash, which improves retrieval quality through end-to-end representation learning and hash encoding, has recently received increasing attention. Owing to the ill-posed gradient problem in optimizing networks with sign activations, existing deep learning to hash methods must first learn continuous representations and then generate binary hash codes in a separate binarization step, which incurs a substantial loss of retrieval quality. This work presents HashNet, a novel deep architecture for learning to hash by a continuation method with convergence guarantees, which learns exactly binary hash codes from imbalanced similarity data. The key idea is to attack the ill-posed gradient problem in optimizing deep networks with non-smooth binary activations by a continuation method: we begin by learning an easier network with a smoothed activation function and let it evolve during training until it eventually becomes the original, difficult-to-optimize deep network with the sign activation function. Comprehensive empirical evidence shows that HashNet can generate exactly binary hash codes and yields state-of-the-art multimedia retrieval performance on standard benchmarks.
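The continuation idea described above can be sketched in a few lines: replace the non-smooth sign activation with a scaled tanh and anneal the scale during training, so the smoothed codes harden toward exactly binary values. Below is a minimal NumPy sketch; the function name smoothed_hash and the particular beta schedule are illustrative assumptions, not the paper's exact training settings.

```python
import numpy as np

def smoothed_hash(z, beta):
    # tanh(beta * z) is a smooth surrogate for sign(z); as beta grows
    # it converges pointwise to the sign activation (for z != 0).
    return np.tanh(beta * z)

# Pre-activation values from a hypothetical hash layer.
z = np.linspace(-2.0, 2.0, 8)

# Illustrative continuation schedule: increase beta across training
# stages so the smoothed codes approach exactly binary values.
for beta in (1.0, 10.0, 100.0):
    print(f"beta={beta:6.1f}  codes={np.round(smoothed_hash(z, beta), 2)}")

# Limit case: the exact binary hash codes given by the sign activation.
print("sign(z)        codes=", np.sign(z))
```

In training, each stage would optimize the network under the current beta and use the resulting weights to initialize the next, harder stage, which is what makes the final sign-activated network tractable to optimize.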
Related Material
[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Cao_2017_ICCV,
author = {Cao, Zhangjie and Long, Mingsheng and Wang, Jianmin and Yu, Philip S.},
title = {HashNet: Deep Learning to Hash by Continuation},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017},
pages = {5608-5617}
}