Cross-Modality Binary Code Learning via Fusion Similarity Hashing

Hong Liu, Rongrong Ji, Yongjian Wu, Feiyue Huang, Baochang Zhang; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 7380-7388

Abstract


Binary code learning has recently emerged as a key topic in large-scale cross-modality retrieval. It aims to map features from multiple modalities into a common Hamming space, where cross-modality similarity can be approximated efficiently via Hamming distance. To this end, most existing works learn binary codes directly from data instances in multiple modalities, preserving intra- and inter-modal similarities separately. Few methods instead consider preserving the "fusion similarity" among multi-modal instances, which can explicitly capture their heterogeneous correlation in cross-modality retrieval. In this paper, we propose a hashing scheme, termed Fusion Similarity Hashing (FSH), which explicitly embeds the graph-based fusion similarity across modalities into a common Hamming space. Inspired by "fusion by diffusion", our core idea is to construct an undirected asymmetric graph to model the fusion similarity among different modalities, upon which a graph hashing scheme with alternating optimization is introduced to learn binary codes that embed such fusion similarity. Quantitative evaluations on three widely used benchmarks, i.e., UCI Handwritten Digit, MIR-Flickr25K and NUS-WIDE, demonstrate that the proposed FSH approach achieves superior performance over state-of-the-art methods.
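To make the overall pipeline concrete, below is a minimal, illustrative sketch of graph-based fusion similarity hashing. It is not the authors' FSH algorithm: it fuses per-modality kNN similarity graphs by simple averaging (a stand-in for the paper's diffusion-based fusion) and binarizes spectral embeddings of the fused graph instead of running FSH's alternating optimization. All function names and parameters here are assumptions for illustration.

```python
# Illustrative sketch only (NOT the exact FSH optimization from the paper).
import numpy as np
from scipy.spatial.distance import cdist

def knn_similarity(X, k=10, sigma=1.0):
    """Symmetric kNN similarity graph with a Gaussian kernel."""
    D = cdist(X, X, metric="sqeuclidean")
    S = np.exp(-D / (2 * sigma ** 2))
    np.fill_diagonal(S, 0.0)
    # Keep only each instance's k nearest neighbors, then symmetrize.
    idx = np.argsort(-S, axis=1)[:, :k]
    mask = np.zeros_like(S, dtype=bool)
    rows = np.repeat(np.arange(S.shape[0]), k)
    mask[rows, idx.ravel()] = True
    return np.where(mask | mask.T, S, 0.0)

def fusion_hash_codes(modalities, n_bits=16):
    """Fuse per-modality graphs, then binarize spectral embeddings."""
    # Average the graphs as a crude proxy for diffusion-based fusion.
    S = sum(knn_similarity(X) for X in modalities) / len(modalities)
    d = S.sum(axis=1)
    L = np.diag(d) - S                  # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    Y = vecs[:, 1:n_bits + 1]           # skip the trivial constant eigenvector
    return (Y > 0).astype(np.int8)      # sign thresholding -> binary codes

# Toy usage: two "modalities" describing the same 100 instances.
rng = np.random.default_rng(0)
img_feats = rng.normal(size=(100, 64))  # e.g., image features
txt_feats = rng.normal(size=(100, 32))  # e.g., text features
codes = fusion_hash_codes([img_feats, txt_feats], n_bits=16)
print(codes.shape)  # (100, 16)
```

Because both modalities map through one fused graph, all instances share a single Hamming space, so cross-modal neighbors can be retrieved by Hamming distance over the learned codes.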

Related Material


[pdf]
[bibtex]
@InProceedings{Liu_2017_CVPR,
author = {Liu, Hong and Ji, Rongrong and Wu, Yongjian and Huang, Feiyue and Zhang, Baochang},
title = {Cross-Modality Binary Code Learning via Fusion Similarity Hashing},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {July},
year = {2017}
}