Visual-Semantic Alignment Across Domains Using a Semi-Supervised Approach

Angelo Carraggi, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara; Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018

Abstract


Visual-semantic embeddings have been extensively used as a powerful model for cross-modal retrieval of images and sentences. In this setting, data coming from different modalities can be projected into a common embedding space, in which distances can be used to infer the similarity between pairs of images and sentences. While this approach has shown impressive performance in fully supervised settings, its application to semi-supervised scenarios has rarely been investigated. In this paper we propose a domain adaptation model for cross-modal retrieval, in which the knowledge learned from a supervised dataset can be transferred to a target dataset in which the pairing between images and sentences is not known, or is not useful for training due to the limited size of the set. Experiments are performed on two unsupervised target scenarios, related to the fashion and cultural heritage domains, respectively. Results show that our model is able to effectively transfer the knowledge learned on ordinary visual-semantic datasets, achieving promising results. As an additional contribution, we collect and release the dataset used for the cultural heritage domain.
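To make the embedding-space idea concrete, the sketch below shows one common way such a model can be set up (this is an illustrative example, not the authors' implementation): image and sentence features are projected into a shared space, and a hinge-based triplet ranking loss pulls matching image-sentence pairs closer than mismatched ones. Feature dimensions, the margin value, and the module names are assumptions for illustration only.

# Minimal visual-semantic embedding sketch (PyTorch); all dimensions and
# hyperparameters below are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VisualSemanticEmbedding(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=1024, embed_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, embed_dim)   # projects image features
        self.txt_proj = nn.Linear(txt_dim, embed_dim)   # projects sentence features

    def forward(self, img_feats, txt_feats):
        # L2-normalize so dot products equal cosine similarities
        img_emb = F.normalize(self.img_proj(img_feats), dim=-1)
        txt_emb = F.normalize(self.txt_proj(txt_feats), dim=-1)
        return img_emb, txt_emb


def triplet_ranking_loss(img_emb, txt_emb, margin=0.2):
    # Pairwise cosine similarities; the diagonal holds the matching pairs.
    scores = img_emb @ txt_emb.t()
    pos = scores.diag().view(-1, 1)
    # Hinge cost for mismatched sentences (rows) and mismatched images (columns).
    cost_s = (margin + scores - pos).clamp(min=0)
    cost_im = (margin + scores - pos.t()).clamp(min=0)
    # Zero out the diagonal so correct pairs contribute no cost.
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    cost_s = cost_s.masked_fill(mask, 0)
    cost_im = cost_im.masked_fill(mask, 0)
    return cost_s.sum() + cost_im.sum()


if __name__ == "__main__":
    model = VisualSemanticEmbedding()
    imgs = torch.randn(8, 2048)   # e.g. CNN image features
    txts = torch.randn(8, 1024)   # e.g. recurrent sentence features
    img_emb, txt_emb = model(imgs, txts)
    print(triplet_ranking_loss(img_emb, txt_emb).item())

At retrieval time, the same similarity matrix is used directly: sentences are ranked for a query image (or vice versa) by their cosine similarity in the shared space.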

Related Material


[bibtex]
@InProceedings{Carraggi_2018_ECCV_Workshops,
author = {Carraggi, Angelo and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
title = {Visual-Semantic Alignment Across Domains Using a Semi-Supervised Approach},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV) Workshops},
month = {September},
year = {2018}
}