AerialVLN: Vision-and-Language Navigation for UAVs
Abstract
Recently emerged Vision-and-Language Navigation (VLN) tasks have drawn significant attention in both the computer vision and natural language processing communities. Existing VLN tasks are built for agents that navigate on the ground, either indoors or outdoors. However, many tasks require intelligent agents to operate in the sky, such as UAV-based goods delivery, traffic/security patrol, and scenery tours, to name a few. Navigating in the sky is more complicated than navigating on the ground because agents need to consider flying height and reason about more complex spatial relationships. To fill this gap and facilitate research in this field, we propose a new task named AerialVLN, which is UAV-based and oriented towards outdoor environments. We develop a 3D simulator that renders near-realistic imagery of 25 city-level scenarios. Our simulator supports continuous navigation, environment extension, and configuration. We also propose an extended baseline model based on the widely used cross-modal alignment (CMA) navigation method. We find that there is still a significant gap between the baseline model and human performance, which suggests that AerialVLN is a new and challenging task.
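To make the cross-modal alignment (CMA) idea mentioned above concrete, the sketch below shows a generic CMA-style navigation step: an instruction is encoded into contextual token features, the agent's current visual state attends over those tokens, and the attended instruction context drives a recurrent state that predicts the next action. This is a minimal illustration of the general technique, not the paper's actual baseline; all layer sizes, the action vocabulary, and the use of precomputed visual features are assumptions.

# Minimal CMA-style navigation step (illustrative sketch; hyperparameters,
# action space, and visual features are assumed, not the AerialVLN baseline).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CMANavigator(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=256, hidden_dim=512,
                 visual_dim=2048, num_actions=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Bidirectional LSTM gives contextual instruction-token features.
        self.instr_encoder = nn.LSTM(embed_dim, hidden_dim,
                                     batch_first=True, bidirectional=True)
        self.instr_out = nn.Linear(hidden_dim * 2, hidden_dim)
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        # Cross-modal attention: the visual/agent state queries the instruction.
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=8, batch_first=True)
        self.state_rnn = nn.LSTMCell(hidden_dim * 2, hidden_dim)
        self.action_head = nn.Linear(hidden_dim, num_actions)

    def encode_instruction(self, tokens):
        # tokens: (B, L) token ids -> (B, L, H) contextual features
        ctx, _ = self.instr_encoder(self.embed(tokens))
        return self.instr_out(ctx)

    def step(self, instr_ctx, visual_feat, state):
        # instr_ctx: (B, L, H); visual_feat: (B, visual_dim); state: (h, c), each (B, H)
        v = self.visual_proj(visual_feat)                     # project current observation
        query = (v + state[0]).unsqueeze(1)                   # (B, 1, H) cross-modal query
        attended, _ = self.attn(query, instr_ctx, instr_ctx)  # align with instruction tokens
        h, c = self.state_rnn(torch.cat([v, attended.squeeze(1)], dim=-1), state)
        return self.action_head(h), (h, c)                    # action logits + new agent state

# Toy usage with random data (hypothetical discrete actions, e.g. forward/turn/ascend/descend/stop).
model = CMANavigator()
tokens = torch.randint(0, 1000, (2, 20))
instr_ctx = model.encode_instruction(tokens)
state = (torch.zeros(2, 512), torch.zeros(2, 512))
logits, state = model.step(instr_ctx, torch.randn(2, 2048), state)
action = F.softmax(logits, dim=-1).argmax(dim=-1)

The key design point of CMA-style models is that the instruction is re-attended at every step, so the agent can ground different instruction phrases as it progresses along the trajectory rather than relying on a single fixed sentence embedding.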
Related Material
[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Liu_2023_ICCV,
  author    = {Liu, Shubo and Zhang, Hongsheng and Qi, Yuankai and Wang, Peng and Zhang, Yanning and Wu, Qi},
  title     = {AerialVLN: Vision-and-Language Navigation for UAVs},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023},
  pages     = {15384-15394}
}