What Value Do Explicit High Level Concepts Have in Vision to Language Problems?

Qi Wu, Chunhua Shen, Lingqiao Liu, Anthony Dick, Anton van den Hengel; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 203-212

Abstract


Much recent progress in Vision-to-Language (V2L) problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. In this paper we investigate whether this direct approach succeeds due to, or despite, the fact that it avoids the explicit representation of high-level information. We propose a method of incorporating high-level concepts into the successful CNN-RNN approach, and show that it achieves a significant improvement over the state of the art in both image captioning and visual question answering (VQA). We also show that the same mechanism can be used to introduce external semantic information and that doing so further improves performance. We achieve the best reported results on both image captioning and VQA on several benchmark datasets, and provide an analysis of the value of explicit high-level concepts in V2L problems.
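To make the core idea concrete, the sketch below (in PyTorch) conditions an LSTM caption decoder on a vector of predicted high-level concept (attribute) probabilities rather than on raw CNN features, which is the general mechanism the abstract describes. It is a minimal illustration, not the authors' implementation: the attribute count, vocabulary size, layer widths, and the `AttributeCaptioner` class itself are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class AttributeCaptioner(nn.Module):
    """Illustrative sketch: an LSTM caption decoder whose first input is a
    high-level concept (attribute) probability vector predicted from the image.
    All sizes here are placeholder assumptions, not the paper's configuration."""

    def __init__(self, num_attributes=256, vocab_size=10000,
                 embed_dim=512, hidden_dim=512):
        super().__init__()
        # Projects the attribute-probability vector into the LSTM input space.
        self.att_proj = nn.Linear(num_attributes, embed_dim)
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, attribute_probs, captions):
        # attribute_probs: (batch, num_attributes), e.g. multi-label CNN outputs in [0, 1]
        # captions: (batch, seq_len) integer word indices
        att_input = self.att_proj(attribute_probs).unsqueeze(1)  # (batch, 1, embed_dim)
        words = self.word_embed(captions)                        # (batch, seq_len, embed_dim)
        # Feed the concept vector as the first step, then the caption words.
        inputs = torch.cat([att_input, words], dim=1)
        hidden, _ = self.lstm(inputs)
        return self.out(hidden)                                  # per-step vocabulary logits

# Toy usage with random stand-in inputs.
model = AttributeCaptioner()
probs = torch.rand(2, 256)                # stand-in for predicted attribute probabilities
caps = torch.randint(0, 10000, (2, 12))   # stand-in for tokenized captions
logits = model(probs, caps)
print(logits.shape)                       # torch.Size([2, 13, 10000])
```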

Related Material


BibTeX:
@InProceedings{Wu_2016_CVPR,
author = {Wu, Qi and Shen, Chunhua and Liu, Lingqiao and Dick, Anthony and van den Hengel, Anton},
title = {What Value Do Explicit High Level Concepts Have in Vision to Language Problems?},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2016}
}