Learning Visual Grounding from Generative Vision and Language Model

Shijie Wang, Dahun Kim, Ali Taalimi, Chen Sun, Weicheng Kuo; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 8046-8056

Abstract


Visual grounding tasks aim to localize image regions based on natural language references. In this work, we explore whether generative VLMs predominantly trained on image-text data can be leveraged to scale up the text annotation of visual grounding data. We find that grounding knowledge already exists in generative VLMs and can be elicited by proper prompting. We thus prompt a VLM to generate object-level descriptions by feeding it object regions from existing object detection datasets. We further propose attribute modeling to explicitly capture important object attributes, and spatial relation modeling to capture inter-object relationships, both of which are common linguistic patterns in referring expressions. Our constructed dataset (500K images, 1M objects, 16M referring expressions) is one of the largest grounding datasets to date and the first grounding dataset with purely model-generated queries and human-annotated objects. To verify the quality of this data, we conduct zero-shot transfer experiments on the popular RefCOCO benchmarks for both referring expression comprehension (REC) and segmentation (RES) tasks. On both tasks, our model significantly outperforms state-of-the-art approaches without using human-annotated visual grounding data. Our results demonstrate the promise of generative VLMs for scaling up visual grounding in the real world.
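
As a rough illustration of the data-generation recipe described in the abstract, the sketch below crops human-annotated object boxes from a detection dataset and prompts a generative VLM for object-level descriptions, attributes, and spatial relations. The vlm_describe interface and the prompt templates are placeholders assumed for illustration, not the authors' actual model or prompts.

```python
from typing import Callable, Dict, List
from PIL import Image

# Hypothetical VLM interface: takes an image crop and a text prompt,
# returns generated text. Any captioning-capable VLM could be plugged in here.
VLMFn = Callable[[Image.Image, str], str]

# Illustrative prompt templates for the three kinds of queries mentioned in
# the abstract: plain object descriptions, attributes, and spatial relations.
PROMPTS = {
    "description": "Describe the object in this image region.",
    "attributes": "List the color, size, and other attributes of this object.",
    "relation": "Describe this object's position relative to nearby objects.",
}


def expressions_for_box(
    image: Image.Image,
    box: List[float],  # [x_min, y_min, x_max, y_max] from a detection annotation
    vlm_describe: VLMFn,
) -> Dict[str, str]:
    """Generate referring-expression-style text for one annotated object box."""
    # Feed the object region (not the full image) to elicit object-level text.
    crop = image.crop((box[0], box[1], box[2], box[3]))
    return {kind: vlm_describe(crop, prompt) for kind, prompt in PROMPTS.items()}
```

Pairing each generated expression with its source box then yields (image, expression, region) triplets of the kind used for REC/RES training, with queries coming from the model and regions from existing human annotations.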

Related Material


[pdf] [supp] [arXiv]
@InProceedings{Wang_2025_WACV,
    author    = {Wang, Shijie and Kim, Dahun and Taalimi, Ali and Sun, Chen and Kuo, Weicheng},
    title     = {Learning Visual Grounding from Generative Vision and Language Model},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {8046-8056}
}