Joint Wasserstein Autoencoders for Aligning Multimodal Embeddings

Shweta Mahajan, Teresa Botschen, Iryna Gurevych, Stefan Roth; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019

Abstract


One of the key challenges in learning joint embeddings of multiple modalities, e.g., of images and text, is to ensure coherent cross-modal semantics that generalize across datasets. We propose to address this through joint Gaussian regularization of the latent representations. Building on Wasserstein autoencoders (WAEs) to encode the input in each domain, we constrain the latent embeddings to match a Gaussian prior that is shared across the two domains, ensuring compatible continuity of the encoded semantic representations of images and texts. Semantic alignment is achieved through supervision from matching image-text pairs. To show the benefits of our semi-supervised representation, we apply it to cross-modal retrieval and phrase localization. We not only achieve state-of-the-art accuracy, but also significantly better generalization across datasets, owing to the semantic continuity of the latent space.
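The core idea above can be illustrated with a minimal sketch. The following is not the authors' implementation: it assumes simple linear encoders, an RBF-kernel MMD penalty (one of the standard WAE regularizers) pulling each modality's latents toward a shared N(0, I) prior, and a squared-error alignment term on matched image-text pairs. All variable names (`W_img`, `W_txt`, `z_img`, `z_txt`) are illustrative.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise RBF kernel values between rows of a and rows of b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of squared MMD between sample sets x and y;
    # zero when the two sample sets are identical.
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2 * rbf_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
d_img, d_txt, d_z, n = 32, 16, 8, 64

# Toy linear "encoders" for the two modalities (illustrative only).
W_img = rng.normal(size=(d_img, d_z)) / np.sqrt(d_img)
W_txt = rng.normal(size=(d_txt, d_z)) / np.sqrt(d_txt)

# A batch of matched image-text pairs (random stand-in features).
imgs = rng.normal(size=(n, d_img))
txts = rng.normal(size=(n, d_txt))
z_img = imgs @ W_img
z_txt = txts @ W_txt

# Samples from the Gaussian prior shared by both domains.
prior = rng.normal(size=(n, d_z))

# Joint Gaussian regularization: both latent sets are pushed toward
# the same prior, plus an alignment loss on matched pairs.
loss = (mmd2(z_img, prior) + mmd2(z_txt, prior)
        + ((z_img - z_txt) ** 2).mean())
```

In a real model the encoders would be deep networks trained by gradient descent on this objective; the sketch only shows how the shared-prior MMD terms and the pairwise alignment term combine into one loss.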

Related Material


[bibtex]
@InProceedings{Mahajan_2019_ICCV,
author = {Mahajan, Shweta and Botschen, Teresa and Gurevych, Iryna and Roth, Stefan},
title = {Joint Wasserstein Autoencoders for Aligning Multimodal Embeddings},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}