Mind's Eye: A Recurrent Visual Representation for Image Caption Generation

Xinlei Chen, C. Lawrence Zitnick; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 2422-2431

Abstract


In this paper we explore the bi-directional mapping between images and their sentence-based descriptions. Critical to our approach is a recurrent neural network that attempts to dynamically build a visual representation of the scene as a caption is being generated or read. The representation automatically learns to remember long-term visual concepts. Our model is capable of both generating novel captions given an image and reconstructing visual features given an image description. We evaluate our approach on several tasks, including sentence generation, sentence retrieval, and image retrieval. State-of-the-art results are shown for the task of generating novel image descriptions. When compared to human-generated captions, our automatically generated captions are equal to or preferred by humans 21.0% of the time. On the image and sentence retrieval tasks, our results are better than or comparable to the state of the art among methods using similar visual features.
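To make the architectural idea concrete, below is a minimal PyTorch sketch of a decoder that maintains a recurrent "visual memory" updated word by word, so the same network can generate a caption from image features or reconstruct visual features from a caption. All class names, parameter names, and dimensions here are hypothetical illustrations of the architecture style described in the abstract, not the paper's actual code or hyperparameters.

import torch
import torch.nn as nn


class RecurrentVisualCaptioner(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256,
                 hidden_dim=512, feat_dim=4096):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Language state: driven by the previous word and the visual memory.
        self.lang_cell = nn.GRUCell(embed_dim + hidden_dim, hidden_dim)
        # Visual memory: recurrently updated from the language state, letting
        # the model accumulate long-term visual concepts over the sentence.
        self.vis_cell = nn.GRUCell(hidden_dim, hidden_dim)
        self.init_visual = nn.Linear(feat_dim, hidden_dim)    # image -> memory
        self.word_logits = nn.Linear(2 * hidden_dim, vocab_size)
        self.feat_decoder = nn.Linear(hidden_dim, feat_dim)   # memory -> features

    def forward(self, words, image_feats=None):
        # words: (batch, T) token ids. image_feats: (batch, feat_dim) CNN
        # features when captioning an image, or None when reading a caption
        # in order to reconstruct visual features from text alone.
        batch, T = words.shape
        h = words.new_zeros((batch, self.hidden_dim), dtype=torch.float)
        if image_feats is not None:
            v = torch.tanh(self.init_visual(image_feats))
        else:
            v = words.new_zeros((batch, self.hidden_dim), dtype=torch.float)
        logits = []
        for t in range(T):
            x = torch.cat([self.embed(words[:, t]), v], dim=1)
            h = self.lang_cell(x, h)   # update language state
            v = self.vis_cell(h, v)    # update visual memory
            logits.append(self.word_logits(torch.cat([h, v], dim=1)))
        # Next-word predictions at each step, plus visual features decoded
        # from the final visual memory (the reconstruction direction).
        return torch.stack(logits, dim=1), self.feat_decoder(v)


# Usage: captioning conditions on image features; feature reconstruction
# reads a caption with image_feats=None.
model = RecurrentVisualCaptioner()
words = torch.randint(0, 10000, (2, 7))          # two captions, 7 tokens each
feats = torch.randn(2, 4096)                     # e.g. CNN fc7-style features
logits, recon = model(words, image_feats=feats)  # (2, 7, 10000), (2, 4096)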

Related Material


@InProceedings{Chen_2015_CVPR,
author = {Chen, Xinlei and Zitnick, C. Lawrence},
title = {Mind's Eye: A Recurrent Visual Representation for Image Caption Generation},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2015},
pages = {2422-2431}
}