Paying Attention to Descriptions Generated by Image Captioning Models

Hamed R. Tavakoli, Rakshith Shetty, Ali Borji, Jorma Laaksonen; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2487-2496

Abstract

To bridge the gap between humans and machines in image understanding and describing, we need further insight into how people describe a perceived scene. In this paper, we study the agreement between bottom-up saliency-based visual attention and object referrals in scene description constructs. We investigate the properties of human-written descriptions and machine-generated ones, and then propose a saliency-boosted image captioning model in order to investigate the benefit of low-level cues in language models. We find that (1) humans mention more salient objects earlier than less salient ones in their descriptions; (2) the better a captioning model performs, the stronger its attention agreement with human descriptions; (3) the proposed saliency-boosted model does not improve significantly over its baseline on the MS COCO dataset, indicating that explicit bottom-up boosting does not help once the task is well learned and tuned on a dataset; and (4) the saliency-boosted model does, however, generalize better on unseen data.
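The abstract names a saliency-boosted captioning model but does not specify its architecture. As a rough illustration only, the sketch below shows one plausible way to bias a soft-attention captioning decoder with a bottom-up saliency map; the fusion rule (adding log-saliency to the attention logits) and all names (saliency_boosted_attention, features, saliency, query, w_att) are assumptions made for this example, not the authors' method.

import torch
import torch.nn.functional as F

def saliency_boosted_attention(features, saliency, query, w_att):
    # features: (B, N, D) spatial CNN region descriptors
    # saliency: (B, N)    bottom-up saliency score per region, in (0, 1]
    # query:    (B, D)    current decoder hidden state
    # w_att:    (D, D)    learned projection for the attention scorer
    # Standard dot-product attention logits over the N regions.
    logits = torch.bmm(features, (query @ w_att).unsqueeze(2)).squeeze(2)  # (B, N)
    # Hypothetical boost: shift the logits by log-saliency, which is
    # equivalent to multiplying each softmax numerator by the region's
    # saliency score, so more salient regions receive more attention mass.
    logits = logits + torch.log(saliency + 1e-8)
    alpha = F.softmax(logits, dim=1)                               # (B, N) weights
    context = torch.bmm(alpha.unsqueeze(1), features).squeeze(1)   # (B, D) context
    return context, alpha

One property of this multiplicative fusion is that the learned top-down scores can still override the bottom-up prior when they dominate, so the boost acts as a soft bias rather than a hard mask.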

Related Material

@InProceedings{Tavakoli_2017_ICCV,
author = {Tavakoli, Hamed R. and Shetty, Rakshith and Borji, Ali and Laaksonen, Jorma},
title = {Paying Attention to Descriptions Generated by Image Captioning Models},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}