Heterogeneous Interactive Learning Network for Unsupervised Cross-modal Retrieval

Yuanchao Zheng, Xiaowei Zhang; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 4665-4680

Abstract


Cross-modal hashing has received considerable attention because of its low storage cost and high retrieval efficiency. However, existing cross-modal retrieval approaches often fail to effectively align semantic information due to the information asymmetry between the image and text modalities. To address this issue, we propose the Heterogeneous Interactive Learning Network (HILN) for unsupervised cross-modal retrieval, which alleviates the heterogeneous semantic gap. Specifically, we introduce a multi-head self-attention mechanism to capture the global dependencies of semantic features within each modality. Moreover, since the semantic relations among object entities are consistent across modalities, we perform heterogeneous feature fusion through a heterogeneous feature interaction module, in which cross-attention learns the interactions between features of different modalities. Finally, to further maintain semantic consistency, we introduce an adversarial loss into network learning to generate more robust hash codes. Extensive experiments demonstrate that the proposed HILN improves the accuracy of T-I and I-T cross-modal retrieval tasks by 7.6% and 5.5%, respectively, over the best competitor DGCPN on the NUS-WIDE dataset. Code is available at https://github.com/Z000204/HILN.
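
The abstract combines intra-modal multi-head self-attention with cross-attention-based fusion of image and text features. The sketch below illustrates that general idea in PyTorch; all class, parameter, and variable names are illustrative assumptions rather than the authors' implementation, and the official repository above should be consulted for HILN itself.

import torch
import torch.nn as nn

class CrossModalAttentionFusion(nn.Module):
    """Minimal sketch: intra-modal self-attention followed by cross-attention fusion.

    This is an assumption-based illustration of the abstract's description,
    not the HILN architecture from the paper's code release.
    """

    def __init__(self, dim=512, num_heads=8):
        super().__init__()
        # Intra-modal multi-head self-attention (global dependencies within a modality).
        self.img_self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.txt_self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Cross-attention: each modality queries the other.
        self.img_to_txt = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.txt_to_img = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, img_feat, txt_feat):
        # img_feat, txt_feat: (batch, seq_len, dim) region/token features.
        img, _ = self.img_self_attn(img_feat, img_feat, img_feat)
        txt, _ = self.txt_self_attn(txt_feat, txt_feat, txt_feat)
        # Image queries attend over text keys/values, and vice versa.
        img_fused, _ = self.img_to_txt(img, txt, txt)
        txt_fused, _ = self.txt_to_img(txt, img, img)
        # Residual fusion of intra- and inter-modal information.
        return img + img_fused, txt + txt_fused

if __name__ == "__main__":
    img = torch.randn(4, 36, 512)   # e.g. 36 region features per image
    txt = torch.randn(4, 20, 512)   # e.g. 20 token features per caption
    fused_img, fused_txt = CrossModalAttentionFusion()(img, txt)
    print(fused_img.shape, fused_txt.shape)  # (4, 36, 512) and (4, 20, 512)

In such a design, the fused features of both modalities would typically be passed through hashing heads and trained with the consistency and adversarial objectives mentioned in the abstract.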

Related Material


@InProceedings{Zheng_2022_ACCV,
    author    = {Zheng, Yuanchao and Zhang, Xiaowei},
    title     = {Heterogeneous Interactive Learning Network for Unsupervised Cross-modal Retrieval},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {4665-4680}
}