Coupled Manifold Learning for Retrieval Across Modalities

Anees Kazi, Sailesh Conjeti, Amin Katouzian, Nassir Navab; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 1321-1328

Abstract

Coupled Manifold Learning (CpML) is targeted at aligning data manifolds across two related modalities to facilitate similarity-preserving cross-modal retrieval. Towards this, we propose a learning paradigm that simultaneously aligns the global topology while preserving local manifold structure. The global topologies are aligned by recovering the underlying mapping functions in the joint manifold space using partially corresponding instances. The inter- and intra-modality affinity matrices are then computed to reinforce the original data skeleton using a perturbed minimum spanning tree (pMST), and to maximize the affinity among similar cross-modal instances, respectively. The performance of the proposed algorithm is evaluated on two benchmark multi-modal image-text datasets (Wikipedia and PascalVOC2012-Sentence). We exhaustively validate and compare CpML against other joint-manifold learning methods and demonstrate superior performance across datasets and tasks.
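The abstract does not spell out how a pMST-based affinity matrix is built, so the following is only a minimal sketch of one plausible construction, not the authors' implementation: distances within a single modality are randomly perturbed over several rounds, an MST is extracted from each perturbed distance matrix, and the union of tree edges defines the non-zero affinities. The names `pmst_affinity`, `n_rounds`, and `noise_scale`, as well as the Gaussian perturbation and heat-kernel weighting, are illustrative assumptions.

```python
# Illustrative sketch of a perturbed minimum spanning tree (pMST)
# intra-modality affinity matrix. NOT the paper's implementation;
# the perturbation model and kernel are assumptions.

import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree


def pmst_affinity(X, n_rounds=10, noise_scale=0.05, rng=None):
    """Union of MST edges over randomly perturbed distance matrices."""
    rng = np.random.default_rng(rng)
    D = squareform(pdist(X))                      # pairwise Euclidean distances
    sigma = np.median(D[D > 0])                   # bandwidth for the heat kernel
    A = np.zeros_like(D)
    for _ in range(n_rounds):
        # Perturb distances so that borderline MST edges also get explored.
        noise = rng.normal(0.0, noise_scale * sigma, size=D.shape)
        Dp = np.clip(D + (noise + noise.T) / 2.0, a_min=1e-12, a_max=None)
        np.fill_diagonal(Dp, 0.0)
        T = minimum_spanning_tree(Dp).toarray()   # MST of the perturbed graph
        edges = (T > 0) | (T.T > 0)               # symmetrize the tree edges
        A[edges] = np.exp(-D[edges] ** 2 / (2.0 * sigma ** 2))
    return A


if __name__ == "__main__":
    X = np.random.default_rng(0).normal(size=(50, 16))  # toy single-modality features
    A = pmst_affinity(X)
    print(A.shape, int((A > 0).sum()), "non-zero affinities")
```

Repeating the MST under perturbations keeps the affinity graph sparse (tree-like, preserving the data skeleton) while making it robust to small distance fluctuations that a single MST would be sensitive to.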

Related Material

[pdf]
[bibtex]
@InProceedings{Kazi_2017_ICCV,
author = {Kazi, Anees and Conjeti, Sailesh and Katouzian, Amin and Navab, Nassir},
title = {Coupled Manifold Learning for Retrieval Across Modalities},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}