ViewRefer: Grasp the Multi-view Knowledge for 3D Visual Grounding

Zoey Guo, Yiwen Tang, Ray Zhang, Dong Wang, Zhigang Wang, Bin Zhao, Xuelong Li; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 15372-15383

Abstract

Understanding 3D scenes from multi-view inputs has been proven to alleviate the view discrepancy issue in 3D visual grounding. However, existing methods normally neglect the view cues embedded in the text modality and fail to weigh the relative importance of different views. In this paper, we propose ViewRefer, a multi-view framework for 3D visual grounding that explores how to grasp view knowledge from both the text and 3D modalities. For the text branch, ViewRefer leverages the diverse linguistic knowledge of large-scale language models to expand a single grounding text into multiple geometry-consistent descriptions. Meanwhile, in the 3D modality, a transformer fusion module with inter-view attention is introduced to boost the interaction of objects across views. On top of that, we further present a set of learnable multi-view prototypes, which memorize scene-agnostic knowledge for different views and enhance the framework from two perspectives: a view-guided attention module for more robust text features, and a view-guided scoring strategy during the final prediction. With our designed paradigm, ViewRefer achieves superior performance on three benchmarks, surpassing the second-best method by +2.8%, +1.5%, and +1.35% on Sr3D, Nr3D, and ScanRefer, respectively.
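To make the two 3D-side mechanisms in the abstract concrete, below is a minimal PyTorch sketch, not the authors' released implementation: an inter-view attention block that lets object features from different views interact, and a view-guided attention block in which learnable, scene-agnostic view prototypes query the text features. All module names, tensor shapes, and hyperparameters here are illustrative assumptions.

```python
# Minimal sketch (assumed shapes and names, not the official ViewRefer code).
import torch
import torch.nn as nn


class InterViewFusion(nn.Module):
    """Self-attention over the view axis so each object token can attend to
    its counterparts in the other views. Assumed input: (batch, views, objects, dim)."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, obj_feats: torch.Tensor) -> torch.Tensor:
        b, v, n, d = obj_feats.shape
        # Treat each object as a separate sequence of V view tokens.
        x = obj_feats.permute(0, 2, 1, 3).reshape(b * n, v, d)
        fused, _ = self.attn(x, x, x)
        x = self.norm(x + fused)
        return x.reshape(b, n, v, d).permute(0, 2, 1, 3)


class ViewGuidedTextAttention(nn.Module):
    """Learnable view prototypes act as queries over the text tokens, producing
    one view-conditioned text feature per prototype (an assumed usage)."""

    def __init__(self, num_views: int = 4, dim: int = 256, heads: int = 4):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_views, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text_feats: torch.Tensor) -> torch.Tensor:
        # text_feats: (batch, tokens, dim) -> output: (batch, num_views, dim)
        q = self.prototypes.unsqueeze(0).expand(text_feats.size(0), -1, -1)
        out, _ = self.attn(q, text_feats, text_feats)
        return out


if __name__ == "__main__":
    obj = torch.randn(2, 4, 16, 256)   # batch=2, views=4, objects=16
    txt = torch.randn(2, 20, 256)      # batch=2, text tokens=20
    print(InterViewFusion()(obj).shape)          # torch.Size([2, 4, 16, 256])
    print(ViewGuidedTextAttention()(txt).shape)  # torch.Size([2, 4, 256])
```

The per-prototype text features could then be matched against the fused per-view object features, which is one plausible way to realize the view-guided scoring mentioned in the abstract.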

Related Material

@InProceedings{Guo_2023_ICCV,
    author    = {Guo, Zoey and Tang, Yiwen and Zhang, Ray and Wang, Dong and Wang, Zhigang and Zhao, Bin and Li, Xuelong},
    title     = {ViewRefer: Grasp the Multi-view Knowledge for 3D Visual Grounding},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {15372-15383}
}