G3raphGround: Graph-Based Language Grounding
Mohit Bajaj, Lanjun Wang, Leonid Sigal; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 4281-4290
Abstract
In this paper we present an end-to-end framework for grounding of phrases in images. In contrast to previous works, our model, which we call GraphGround, uses graphs to formulate more complex, non-sequential dependencies among proposal image regions and phrases. We capture intra-modal dependencies using a separate graph neural network for each modality (visual and lingual), and then use conditional message-passing in another graph neural network to fuse their outputs and capture cross-modal relationships. This final representation results in grounding decisions. The framework supports many-to-many matching and is able to ground a single phrase to multiple image regions and vice versa. We validate our design choices through a series of ablation studies and illustrate state-of-the-art performance on the Flickr30k and ReferIt Game benchmark datasets.
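The pipeline described in the abstract (per-modality graph neural networks, a cross-modal fusion graph with conditional message passing, and per-pair grounding scores) can be illustrated with a minimal sketch. The PyTorch code below is an assumption-laden illustration, not the authors' released implementation: the module names, hidden dimensions, fully connected graph structure, and the sigmoid-gating used for conditioning are all hypothetical choices made for the sake of a runnable example.

```python
# Minimal sketch (not the authors' code) of the pipeline in the abstract:
# (1) a separate message-passing GNN per modality (visual / lingual),
# (2) a fusion GNN whose inputs are conditioned on the other modality,
# (3) per (phrase, region) scores that allow many-to-many grounding.
import torch
import torch.nn as nn


class MessagePassing(nn.Module):
    """One round of mean-aggregated message passing on a fully connected graph."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.upd = nn.GRUCell(dim, dim)

    def forward(self, h):                       # h: (n, dim) node states
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        m = self.msg(pairs).mean(dim=1)         # aggregate incoming messages
        return self.upd(m, h)                   # GRU-style node update


class GraphGroundSketch(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.visual_gnn = MessagePassing(dim)   # intra-modal: image regions
        self.phrase_gnn = MessagePassing(dim)   # intra-modal: phrases
        self.fusion_gnn = MessagePassing(dim)   # cross-modal fusion graph
        self.gate = nn.Linear(dim, dim)         # hypothetical conditioning gate
        self.score = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                   nn.Linear(dim, 1))

    def forward(self, regions, phrases):        # (R, dim), (P, dim) features
        v = self.visual_gnn(regions)            # intra-modal dependencies (visual)
        p = self.phrase_gnn(phrases)            # intra-modal dependencies (lingual)
        # Conditional message passing: each modality's nodes are gated by the
        # mean state of the other before fusion (one plausible conditioning).
        v_cond = v * torch.sigmoid(self.gate(p.mean(0, keepdim=True)))
        p_cond = p * torch.sigmoid(self.gate(v.mean(0, keepdim=True)))
        fused = self.fusion_gnn(torch.cat([v_cond, p_cond], dim=0))
        fv, fp = fused[: v.size(0)], fused[v.size(0):]
        # Independent per-pair sigmoid scores permit many-to-many matches:
        # one phrase may ground to several regions and vice versa.
        pairs = torch.cat([fp.unsqueeze(1).expand(-1, fv.size(0), -1),
                           fv.unsqueeze(0).expand(fp.size(0), -1, -1)], dim=-1)
        return torch.sigmoid(self.score(pairs)).squeeze(-1)   # (P, R)


# Usage: score 5 proposal regions against 3 phrases.
model = GraphGroundSketch(dim=256)
scores = model(torch.randn(5, 256), torch.randn(3, 256))
print(scores.shape)  # torch.Size([3, 5])
```

Note that scoring each (phrase, region) pair independently with a sigmoid, rather than normalizing over regions with a softmax, is what makes the many-to-many matching mentioned in the abstract possible in this sketch.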
Related Material
[pdf]
[bibtex]
@InProceedings{Bajaj_2019_ICCV,
author = {Bajaj, Mohit and Wang, Lanjun and Sigal, Leonid},
title = {G3raphGround: Graph-Based Language Grounding},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}