Waypoint Models for Instruction-Guided Navigation in Continuous Environments

Jacob Krantz, Aaron Gokaslan, Dhruv Batra, Stefan Lee, Oleksandr Maksymets; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 15162-15171

Abstract


Little inquiry has explicitly addressed the role of action spaces in language-guided visual navigation -- either in terms of their effect on navigation success or the efficiency with which a robotic agent could execute the resulting trajectory. Building on the recently released VLN-CE setting for instruction following in continuous environments, we develop a class of language-conditioned waypoint prediction networks to examine this question. We vary the expressivity of these models to explore a spectrum between low-level actions and continuous waypoint prediction. We measure task performance and estimated execution time on a profiled LoCoBot robot. We find that more expressive models result in simpler, faster-to-execute trajectories, but lower-level actions can achieve better navigation metrics by more closely approximating shortest paths. Further, our models outperform prior work in VLN-CE and set a new state of the art on the public leaderboard -- increasing success rate by 4% with our best model on this challenging task.
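To make the idea of language-conditioned waypoint prediction concrete, the sketch below shows a minimal predictor that fuses a visual feature with an encoded instruction and regresses a relative waypoint (distance, heading offset) together with a stop decision. This is an illustrative assumption for exposition only: the module names, feature dimensions, and polar parameterization are placeholders, not the architecture used in the paper.

# Minimal sketch of a language-conditioned waypoint predictor.
# Feature sizes and the (distance, heading) output are illustrative assumptions.
import torch
import torch.nn as nn

class WaypointPredictor(nn.Module):
    def __init__(self, visual_dim=512, instr_dim=256, hidden_dim=256):
        super().__init__()
        # Fuse the agent's visual observation with the encoded instruction.
        self.fuse = nn.Sequential(
            nn.Linear(visual_dim + instr_dim, hidden_dim),
            nn.ReLU(),
        )
        # Predict a relative waypoint as (distance, heading offset).
        self.waypoint_head = nn.Linear(hidden_dim, 2)
        # Predict whether to stop instead of moving to another waypoint.
        self.stop_head = nn.Linear(hidden_dim, 1)

    def forward(self, visual_feat, instr_feat):
        h = self.fuse(torch.cat([visual_feat, instr_feat], dim=-1))
        dist_heading = self.waypoint_head(h)        # continuous waypoint
        stop_logit = self.stop_head(h).squeeze(-1)  # stop decision
        return dist_heading, stop_logit

# Example: one forward pass with random features for a batch of 4 agents.
if __name__ == "__main__":
    model = WaypointPredictor()
    waypoint, stop = model(torch.randn(4, 512), torch.randn(4, 256))
    print(waypoint.shape, stop.shape)  # torch.Size([4, 2]) torch.Size([4])

Varying the expressivity of such a head (e.g., predicting discrete low-level actions versus continuous offsets) is the spectrum the paper studies; more expressive outputs yield fewer, longer steps per trajectory, which is what makes execution faster on a real robot.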

Related Material


[pdf] [supp] [arXiv]
@InProceedings{Krantz_2021_ICCV,
    author    = {Krantz, Jacob and Gokaslan, Aaron and Batra, Dhruv and Lee, Stefan and Maksymets, Oleksandr},
    title     = {Waypoint Models for Instruction-Guided Navigation in Continuous Environments},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {15162-15171}
}