LLaVA-SpaceSGG: Visual Instruct Tuning for Open-Vocabulary Scene Graph Generation with Enhanced Spatial Relations

Mingjie Xu, Mengyang Wu, Yuzhi Zhao, Jason Chun Lok Li, Weifeng Ou; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 6362-6372

Abstract


Scene Graph Generation (SGG) converts visual scenes into structured graph representations, providing deeper scene understanding for complex vision tasks. However, existing SGG models often overlook essential spatial relationships and struggle to generalize in open-vocabulary contexts. To address these limitations, we propose LLaVA-SpaceSGG, a multimodal large language model (MLLM) designed for open-vocabulary SGG with enhanced spatial relation modeling. To train it, we collect an SGG instruction-tuning dataset named SpaceSGG. This dataset is constructed by combining publicly available datasets and synthesizing data with open-source models within our data construction pipeline. It combines object locations, object relations, and depth information, resulting in three data formats: spatial SGG description, question-answering, and conversation. To enhance the transfer of MLLMs' inherent capabilities to the SGG task, we introduce a two-stage training paradigm. Experiments show that LLaVA-SpaceSGG outperforms other open-vocabulary SGG methods, boosting recall by 8.6% and mean recall by 28.4% compared to the baseline. Our codebase, dataset, and trained models are publicly available on GitHub: https://github.com/Endlinc/LLaVA-SpaceSGG.
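To make the three data formats concrete, the Python sketch below assembles one plausible SpaceSGG-style sample from object boxes, relations, and per-object depth. This is a hypothetical illustration only: the field names, the depth heuristic, and all prompts are our own assumptions, not the authors' released schema, which is defined in the dataset linked above.

    from dataclasses import dataclass

    # NOTE: hypothetical schema for illustration; the real SpaceSGG format
    # is defined by the authors' released dataset, not reproduced here.

    @dataclass
    class SceneObject:
        name: str
        box: tuple          # assumed convention: (x1, y1, x2, y2) in pixels
        mean_depth: float   # assumed: mean monocular depth over the box

    @dataclass
    class Relation:
        subject: int        # index into `objects`
        predicate: str      # e.g. "on top of", "in front of"
        obj: int            # index into `objects`

    objects = [
        SceneObject("cup",   (120, 200, 180, 260), mean_depth=0.42),
        SceneObject("table", ( 60, 240, 400, 380), mean_depth=0.55),
    ]
    relations = [Relation(subject=0, predicate="on top of", obj=1)]

    def describe(r: Relation) -> str:
        """Render one relation as a depth-aware sentence (format 1)."""
        s, o = objects[r.subject], objects[r.obj]
        depth_hint = "closer to" if s.mean_depth < o.mean_depth else "farther from"
        return (f"The {s.name} is {r.predicate} the {o.name}, "
                f"and appears {depth_hint} the camera.")

    description = " ".join(describe(r) for r in relations)

    # Format 2: single-turn spatial question answering over the same graph.
    qa = {
        "question": "What is the spatial relation between the cup and the table?",
        "answer": "(cup, on top of, table)",
    }

    # Format 3: multi-turn conversation grounded in the same scene.
    conversation = [
        {"role": "user",      "content": "List the objects in the image."},
        {"role": "assistant", "content": ", ".join(o.name for o in objects)},
        {"role": "user",      "content": "Describe their spatial arrangement."},
        {"role": "assistant", "content": description},
    ]

    if __name__ == "__main__":
        print(description)
        print(qa["question"], "->", qa["answer"])

Under these assumptions, all three formats are views over a single underlying spatial scene graph, which matches the abstract's account of one construction pipeline producing three instruction styles.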

Related Material


[bibtex]
@InProceedings{Xu_2025_WACV,
    author    = {Xu, Mingjie and Wu, Mengyang and Zhao, Yuzhi and Li, Jason Chun Lok and Ou, Weifeng},
    title     = {LLaVA-SpaceSGG: Visual Instruct Tuning for Open-Vocabulary Scene Graph Generation with Enhanced Spatial Relations},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {6362-6372}
}