Multi-Task Learning of Hierarchical Vision-Language Representation

Duy-Kien Nguyen, Takayuki Okatani; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 10492-10501

Abstract


Building an AI system that performs tasks involving vision and language at a human level remains challenging. So far, researchers have tackled individual tasks in isolation, designing a dedicated network for each and training it on task-specific datasets. Although this approach has seen a certain degree of success, it makes it difficult to understand the relations among different tasks and to transfer the knowledge learned for one task to others. We propose a multi-task learning approach that learns a vision-language representation shared by many tasks from their diverse datasets. The representation is hierarchical, and the prediction for each task is computed from the representation at its corresponding level of the hierarchy. We show through experiments that our method consistently outperforms previous single-task-learning methods on image-caption retrieval, visual question answering, and visual grounding. We also analyze the learned hierarchical representation by visualizing the attention maps generated in our network.
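The core idea of the abstract, that a single shared encoder produces a hierarchy of representations and each task reads its prediction from its own level, can be illustrated with a minimal sketch. This is plain Python with toy transformations, not the paper's network; the level assignments and function names are illustrative assumptions, not taken from the paper.

```python
def shared_encoder(features):
    """Build a hierarchy of representations from the input features.
    Each level here is a toy transformation of the level below it,
    standing in for the shared layers of the real network."""
    levels = [features]
    for _ in range(2):  # two levels stacked on top of the input
        levels.append([2 * x + 1 for x in levels[-1]])
    return levels  # levels[0] = lowest, levels[-1] = highest

# Illustrative assumption: each task is attached to the level of the
# hierarchy it is presumed to need (not the paper's actual assignment).
TASK_LEVEL = {
    "visual_grounding": 0,    # low-level, spatially grounded
    "caption_retrieval": 1,   # mid-level
    "vqa": 2,                 # high-level reasoning
}

def predict(task, features):
    """Run the shared encoder once, then read out the representation
    at the task's assigned level; sum() stands in for a task head."""
    levels = shared_encoder(features)
    return sum(levels[TASK_LEVEL[task]])
```

With input `[1, 2, 3]`, the hierarchy is `[1, 2, 3]`, `[3, 5, 7]`, `[7, 11, 15]`, so each task's prediction comes from a different level of the same single forward pass, which is what lets the tasks share and transfer learned structure.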

Related Material


[bibtex]
@InProceedings{Nguyen_2019_CVPR,
author = {Nguyen, Duy-Kien and Okatani, Takayuki},
title = {Multi-Task Learning of Hierarchical Vision-Language Representation},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}