Phrase Localization and Visual Relationship Detection With Comprehensive Image-Language Cues

Bryan A. Plummer, Arun Mallya, Christopher M. Cervantes, Julia Hockenmaier, Svetlana Lazebnik; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 1928-1937

Abstract


This paper presents a framework for the localization, or grounding, of phrases in images using a large collection of linguistic and visual cues. We model the appearance, size, and position of entity bounding boxes; adjectives that contain attribute information; and spatial relationships between pairs of entities connected by verbs or prepositions. Special attention is given to relationships between people and mentions of clothing or body parts, as these are useful for distinguishing individuals. We automatically learn weights for combining these cues and, at test time, perform joint inference over all phrases in a caption. The resulting system achieves state-of-the-art performance on phrase localization on the Flickr30k Entities dataset and on visual relationship detection on the Stanford VRD dataset.
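The abstract describes scoring candidate boxes under multiple cues and combining them with learned weights. A minimal sketch of that combination step is below; the cue names, weights, and scores are purely illustrative placeholders, not values or an API from the paper.

```python
# Illustrative sketch: combine per-cue box scores with learned scalar weights
# and pick the highest-scoring candidate box. All names and numbers are
# hypothetical examples, not the paper's actual cues or learned weights.
import numpy as np

def score_boxes(cue_scores, weights):
    """Weighted sum of per-cue scores over candidate boxes.

    cue_scores: dict mapping cue name -> array of scores, one per candidate box
    weights:    dict mapping cue name -> learned scalar weight
    """
    return sum(weights[c] * np.asarray(cue_scores[c], dtype=float)
               for c in cue_scores)

# Example: three candidate boxes scored under three (made-up) cues.
cues = {
    "appearance": [0.9, 0.4, 0.1],
    "size":       [0.2, 0.8, 0.5],
    "adjective":  [0.7, 0.1, 0.3],
}
w = {"appearance": 1.0, "size": 0.5, "adjective": 0.8}

scores = score_boxes(cues, w)
best = int(np.argmax(scores))  # index of the best-scoring candidate box
```

In the paper the weights are learned automatically and inference is done jointly over all phrases in a caption; this sketch shows only the per-phrase weighted combination.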

Related Material


[bibtex]
@InProceedings{Plummer_2017_ICCV,
author = {Plummer, Bryan A. and Mallya, Arun and Cervantes, Christopher M. and Hockenmaier, Julia and Lazebnik, Svetlana},
title = {Phrase Localization and Visual Relationship Detection With Comprehensive Image-Language Cues},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}