Generating Image Descriptions Using Semantic Similarities in the Output Space

Yashaswi Verma, Ankush Gupta, Prashanth Mannem, C.V. Jawahar; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2013, pp. 288-293

Abstract

Automatically generating meaningful descriptions for images has recently emerged as an important area of research. In this direction, the nearest-neighbour based generative phrase prediction model (PPM) proposed by Gupta et al. (2012) was shown to achieve state-of-the-art results on the PASCAL sentence dataset, thanks to the simultaneous use of three different sources of information (i.e., visual cues, corpus statistics and available descriptions). However, it does not utilize semantic similarities among phrases, which could help relate semantically similar phrases during phrase-relevance prediction. In this paper, we extend that model by considering inter-phrase semantic similarities. To compute the similarity between two phrases, we consider similarities among their constituent words, determined using WordNet. We also re-formulate the objective function for parameter learning by penalizing each pair of phrases unevenly, in a manner similar to structured prediction. Various automatic and human evaluations demonstrate the advantage of our "semantic phrase prediction model" (SPPM) over PPM.
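
The two computations the abstract describes, a phrase similarity built from WordNet similarities of the constituent words and an uneven penalty on phrase pairs in the spirit of structured prediction, can be illustrated with a short sketch. This is not the authors' implementation: it assumes NLTK's WordNet interface, and the Wu-Palmer measure with a max-match-then-average aggregation is an assumed choice, since the abstract does not give the exact formula.

from itertools import product

from nltk.corpus import wordnet as wn  # requires nltk.download('wordnet')

def word_similarity(w1, w2):
    # Best Wu-Palmer similarity over all synset pairs of the two words;
    # wup_similarity returns None for incomparable synsets, treated as 0.
    scores = [s1.wup_similarity(s2) or 0.0
              for s1, s2 in product(wn.synsets(w1), wn.synsets(w2))]
    return max(scores, default=0.0)

def phrase_similarity(p1, p2):
    # Each word in p1 is matched to its most similar word in p2, and the
    # matches are averaged, giving a phrase-level score in [0, 1]. The
    # aggregation scheme is an illustrative assumption.
    if not p1 or not p2:
        return 0.0
    return sum(max(word_similarity(w1, w2) for w2 in p2) for w1 in p1) / len(p1)

def pairwise_penalty(p_true, p_pred):
    # Uneven penalty in the spirit of structured prediction (margin
    # rescaling): predicting a semantically close phrase costs less
    # than predicting a distant one.
    return 1.0 - phrase_similarity(p_true, p_pred)

# Example with two (attribute, object) phrases:
print(phrase_similarity(("small", "dog"), ("little", "puppy")))
print(pairwise_penalty(("small", "dog"), ("red", "car")))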

Related Material

@InProceedings{Verma_2013_CVPR_Workshops,
author = {Verma, Yashaswi and Gupta, Ankush and Mannem, Prashanth and Jawahar, C.V.},
title = {Generating Image Descriptions Using Semantic Similarities in the Output Space},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2013}
}