Alternating Co-Quantization for Cross-Modal Hashing
Go Irie, Hiroyuki Arai, Yukinobu Taniguchi; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 1886-1894
Abstract
This paper addresses the problem of unsupervised learning of binary hash codes for efficient cross-modal retrieval. Many unimodal hashing studies have shown that both preserving data similarities and maintaining quantization quality are essential for improving retrieval performance with binary hash codes. However, most existing cross-modal hashing methods have focused mainly on the former, while the latter remains almost untouched. We propose a method for minimizing binary quantization errors that is tailored to cross-modal hashing. Our approach, named Alternating Co-Quantization (ACQ), alternately seeks binary quantizers for each modality space, exploiting connections to the data of the other modality, so that the quantizers yield minimal quantization errors while preserving data similarities. ACQ can be coupled with various existing cross-modal dimension reduction methods, such as Canonical Correlation Analysis (CCA), and substantially boosts their retrieval performance in the Hamming space. Extensive experiments demonstrate that ACQ can outperform several state-of-the-art methods, even when combined with simple CCA.
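To make the high-level description concrete, the sketch below shows what an alternating co-quantization step could look like when coupled with CCA. It is a minimal sketch under the assumption of an ITQ-style formulation: both modalities are first projected into a common space with CCA, then share one set of binary codes while each modality's orthogonal rotation is updated by solving an orthogonal Procrustes problem. The exact ACQ objective and update rules in the paper may differ, and all helper names here are illustrative rather than taken from the authors' code.

import numpy as np
from sklearn.cross_decomposition import CCA

def cca_project(X, Y, dim):
    """Assumed preprocessing step (not ACQ itself): project both modalities
    (e.g. image and text features) into a shared 'dim'-dimensional space."""
    cca = CCA(n_components=dim)
    Zx, Zy = cca.fit_transform(X, Y)  # rows are paired samples
    return Zx, Zy, cca

def alternating_co_quantization(Zx, Zy, n_iter=50, seed=0):
    """ITQ-style sketch: learn shared binary codes B in {-1, +1}^{n x d} and
    per-modality orthogonal rotations (Rx, Ry) that roughly minimize
    ||B - Zx Rx||_F^2 + ||B - Zy Ry||_F^2 by alternating updates."""
    n, d = Zx.shape
    rng = np.random.default_rng(seed)
    Rx, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal init
    Ry, _ = np.linalg.qr(rng.standard_normal((d, d)))
    for _ in range(n_iter):
        # Code update: shared codes tie the two modalities together.
        B = np.sign(Zx @ Rx + Zy @ Ry)
        B[B == 0] = 1.0
        # Rotation updates: each is an orthogonal Procrustes problem,
        # solved in closed form via the SVD of Z^T B.
        Ux, _, Vtx = np.linalg.svd(Zx.T @ B)
        Rx = Ux @ Vtx
        Uy, _, Vty = np.linalg.svd(Zy.T @ B)
        Ry = Uy @ Vty
    return B, Rx, Ry

# Example usage (random data standing in for image/text features):
# X, Y = np.random.randn(1000, 512), np.random.randn(1000, 300)
# Zx, Zy, cca = cca_project(X, Y, dim=32)
# B, Rx, Ry = alternating_co_quantization(Zx, Zy)
# A new image query x could then be encoded as
# np.sign(cca.transform(x[None, :])[0] @ Rx).

In this sketch the cross-modal coupling comes solely from the shared code matrix B; whether the published ACQ objective couples the modalities this way is an assumption made here for illustration.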
Related Material
[pdf]
[bibtex]
@InProceedings{Irie_2015_ICCV,
author = {Irie, Go and Arai, Hiroyuki and Taniguchi, Yukinobu},
title = {Alternating Co-Quantization for Cross-Modal Hashing},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}