Bird's-Eye-View Scene Graph for Vision-Language Navigation

Rui Liu, Xiaohan Wang, Wenguan Wang, Yi Yang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 10968-10980


Vision-language navigation (VLN), which requires an agent to navigate 3D environments by following human instructions, has seen great advances. However, current agents are built upon panoramic observations, which hinders their ability to perceive 3D scene geometry and easily leads to ambiguous selection among panoramic views. To address these limitations, we present a BEV Scene Graph (BSG), which leverages multi-step BEV representations to encode the scene layouts and geometric cues of indoor environments under the supervision of 3D detection. During navigation, BSG builds a local BEV representation at each step and maintains a BEV-based global scene map, which stores and organizes all the online-collected local BEV representations according to their topological relations. Based on BSG, the agent predicts a local BEV grid-level decision score and a global graph-level decision score, which are combined with a subview selection score over panoramic views for more accurate action prediction. Our approach significantly outperforms state-of-the-art methods on REVERIE, R2R, and R4R, showing the potential of BEV perception in VLN.
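The abstract describes fusing three decision scores (local BEV grid-level, global graph-level, and panoramic subview selection) into a single action prediction. A minimal sketch of one plausible fusion scheme is below; the function names, the weighted-softmax combination, and the assumption that all three scores are already aligned to a shared set of candidate actions are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_action_scores(bev_grid_scores, graph_node_scores, subview_scores,
                       weights=(1/3, 1/3, 1/3)):
    """Hypothetical fusion of the three decision scores described in the
    abstract: local BEV grid-level, global graph-level, and panoramic
    subview selection. Assumes each input is a 1-D array of raw scores
    over the SAME candidate actions (the paper's actual mapping between
    BEV cells, graph nodes, and subviews is more involved)."""
    w1, w2, w3 = weights
    fused = (w1 * softmax(np.asarray(bev_grid_scores, dtype=float))
             + w2 * softmax(np.asarray(graph_node_scores, dtype=float))
             + w3 * softmax(np.asarray(subview_scores, dtype=float)))
    # Renormalize so the fused scores form a probability distribution.
    return fused / fused.sum()
```

For example, if all three score vectors favor the same candidate, the fused distribution concentrates on it; disagreement between the local and global scores flattens the distribution, which is the intuition behind combining grid-level and graph-level decisions.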

Related Material

@InProceedings{Liu_2023_ICCV,
    author    = {Liu, Rui and Wang, Xiaohan and Wang, Wenguan and Yang, Yi},
    title     = {Bird's-Eye-View Scene Graph for Vision-Language Navigation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {10968-10980}
}