Panoptic Narrative Grounding

Cristina González, Nicolás Ayobi, Isabela Hernández, José Hernández, Jordi Pont-Tuset, Pablo Arbeláez; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 1364-1373

Abstract

This paper proposes Panoptic Narrative Grounding, a spatially fine-grained and general formulation of the natural language visual grounding problem. We establish an experimental framework for the study of this new task, including new ground truth and metrics, and we propose a strong baseline method to serve as a stepping stone for future work. We exploit the intrinsic semantic richness of an image by including panoptic categories, and we approach visual grounding at a fine-grained level by using segmentations. In terms of ground truth, we propose an algorithm to automatically transfer Localized Narratives annotations to specific regions in the panoptic segmentations of the MS COCO dataset. To guarantee the quality of our annotations, we take advantage of the semantic structure of WordNet to incorporate only those noun phrases that are grounded to a meaningfully related panoptic segmentation region. The proposed baseline achieves a performance of 55.4 absolute Average Recall points. This result provides a suitable foundation for pushing the envelope further in the development of methods for Panoptic Narrative Grounding.
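The Average Recall figure quoted above can be illustrated with a minimal sketch. The function names, the COCO-style IoU threshold range (0.5 to 0.95 in steps of 0.05), and the toy masks below are my own assumptions for illustration, not the paper's evaluation code:

```python
# Hypothetical sketch of Average Recall over segmentation masks.
# Masks are flattened binary lists; thresholds follow the common
# COCO-style 0.5:0.05:0.95 convention (an assumption here).

def mask_iou(pred, gt):
    """Intersection over Union between two same-sized binary masks."""
    inter = sum(p and g for p, g in zip(pred, gt))
    union = sum(p or g for p, g in zip(pred, gt))
    return inter / union if union else 0.0

def average_recall(pairs, thresholds=None):
    """Recall averaged over IoU thresholds for (pred, gt) mask pairs."""
    if thresholds is None:
        thresholds = [0.5 + 0.05 * i for i in range(10)]
    ious = [mask_iou(p, g) for p, g in pairs]
    recalls = [sum(iou >= t for iou in ious) / len(ious) for t in thresholds]
    return sum(recalls) / len(recalls)

# Toy example: two noun phrases with 4-pixel ground-truth regions.
pairs = [
    ([1, 1, 0, 0], [1, 1, 0, 0]),  # perfect match, IoU = 1.0
    ([1, 0, 0, 0], [1, 1, 0, 0]),  # partial match, IoU = 0.5
]
print(average_recall(pairs))  # → 0.55
```

Under this convention, a score of 55.4 points corresponds to an average recall of 0.554 across the ten IoU thresholds.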

Related Material


BibTeX:
@InProceedings{Gonzalez_2021_ICCV,
    author    = {Gonz\'alez, Cristina and Ayobi, Nicol\'as and Hern\'andez, Isabela and Hern\'andez, Jos\'e and Pont-Tuset, Jordi and Arbel\'aez, Pablo},
    title     = {Panoptic Narrative Grounding},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {1364-1373}
}