Learning Robust Visual-Semantic Embeddings

Yao-Hung Hubert Tsai, Liang-Kang Huang, Ruslan Salakhutdinov; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 3571-3580

Abstract


Many of the existing methods for learning a joint embedding of images and text rely only on supervised information from paired images and their textual attributes. Taking advantage of the recent success of unsupervised learning in deep neural networks, we propose an end-to-end learning framework that extracts more robust multi-modal representations across domains. The proposed method combines representation learning models (i.e., auto-encoders) with cross-domain learning criteria (i.e., a Maximum Mean Discrepancy loss) to learn joint embeddings for semantic and visual features. A novel technique of unsupervised-data adaptation inference is introduced to construct more comprehensive embeddings for both labeled and unlabeled data. We evaluate our method on the Animals with Attributes and Caltech-UCSD Birds 200-2011 datasets across a wide range of applications, including zero- and few-shot image recognition and retrieval, in both inductive and transductive settings. Empirically, we show that our framework improves over the current state of the art on many of the considered tasks.
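To make the stated objective concrete, below is a minimal sketch (not the authors' implementation) of how an auto-encoder reconstruction loss can be combined with a Maximum Mean Discrepancy (MMD) loss to align visual and semantic embeddings in a shared space. The encoder/decoder sizes, the RBF kernel, the bandwidth sigma, and the loss weighting are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch, assuming an RBF kernel and toy linear encoders/decoders.
import torch
import torch.nn as nn

def mmd_rbf(x, y, sigma=1.0):
    """Biased (V-statistic) MMD^2 estimate between samples x and y with an RBF kernel."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)          # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Hypothetical encoders/decoders mapping each modality into a shared 64-d space.
img_enc = nn.Linear(2048, 64)                  # e.g. CNN image features -> embedding
txt_enc = nn.Linear(300, 64)                   # e.g. attribute/text vectors -> embedding
img_dec = nn.Linear(64, 2048)                  # reconstruction heads (auto-encoder part)
txt_dec = nn.Linear(64, 300)

img_feats = torch.randn(32, 2048)              # dummy batch of visual features
txt_feats = torch.randn(32, 300)               # dummy batch of semantic features

z_img, z_txt = img_enc(img_feats), txt_enc(txt_feats)
recon = nn.functional.mse_loss(img_dec(z_img), img_feats) + \
        nn.functional.mse_loss(txt_dec(z_txt), txt_feats)
loss = recon + mmd_rbf(z_img, z_txt)           # reconstruction + cross-domain MMD
loss.backward()
```

The MMD term pulls the distributions of image and text embeddings together, while the reconstruction terms keep each embedding informative about its own modality; the actual architectures, kernels, and weights used in the paper differ from this toy setup.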

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Tsai_2017_ICCV,
author = {Hubert Tsai, Yao-Hung and Huang, Liang-Kang and Salakhutdinov, Ruslan},
title = {Learning Robust Visual-Semantic Embeddings},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}