Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-Training

Weituo Hao, Chunyuan Li, Xiujun Li, Lawrence Carin, Jianfeng Gao; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 13137-13146

Abstract


Learning to navigate in a visual environment by following natural-language instructions is a challenging task, because the multimodal inputs to the agent are highly variable and training data for a new task is often limited. In this paper, we present the first pre-training and fine-tuning paradigm for vision-and-language navigation (VLN) tasks. By training on a large number of image-text-action triplets in a self-supervised manner, the pre-trained model provides generic representations of visual environments and language instructions. It can be used as a drop-in replacement in existing VLN frameworks, yielding the proposed agent, PREVALENT, which learns more effectively on new tasks and generalizes better to previously unseen environments. The performance is validated on three VLN tasks. On the Room-to-Room benchmark, our model improves the state of the art from 47% to 51% in success rate weighted by path length (SPL). Furthermore, the learned representation transfers to other VLN tasks: on two recent tasks, vision-and-dialog navigation and "Help, Anna!", PREVALENT leads to significant improvement over existing methods, achieving a new state of the art.
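The two-stage paradigm in the abstract — self-supervised pre-training on image-text-action triplets, then task-specific fine-tuning — can be sketched in miniature. This is a purely illustrative toy, not the paper's model: `encode`, the 4-dimensional representation, and all data below are hypothetical stand-ins for the actual transformer-based encoder and R2R data.

```python
# Toy sketch of the pre-train / fine-tune paradigm described in the abstract.
# All names, shapes, and data are illustrative assumptions, not PREVALENT's code.

def encode(image_feat, instruction, action):
    """Hypothetical 'generic representation': a fixed-size vector for one triplet."""
    h = [0.0] * 4
    for i, tok in enumerate(instruction.split()):
        h[i % 4] += len(tok) / 10.0          # crude text signal
    h[0] += sum(image_feat) / len(image_feat)  # crude visual signal
    h[1] += float(action)                      # crude action signal
    return h

def pretrain(triplets):
    """Self-supervised stage: pool representations over many image-text-action triplets."""
    reps = [encode(img, txt, act) for img, txt, act in triplets]
    n = len(reps)
    return [sum(r[k] for r in reps) / n for k in range(4)]

def finetune(generic_rep, task_examples):
    """Drop-in stage: adapt the generic representation on a small task dataset."""
    adapted = list(generic_rep)
    for img, txt, act in task_examples:
        r = encode(img, txt, act)
        adapted = [0.9 * a + 0.1 * v for a, v in zip(adapted, r)]
    return adapted

# Large (here: tiny) pre-training corpus of image-text-action triplets.
triplets = [([0.2, 0.4], "turn left at the stairs", 1),
            ([0.5, 0.1], "walk past the sofa", 2)]
generic = pretrain(triplets)
# Limited data for a new downstream task.
agent = finetune(generic, [([0.3, 0.3], "stop near the door", 0)])
print(len(agent))  # prints 4
```

The point of the sketch is the division of labor: `pretrain` never sees the downstream task, and `finetune` starts from the generic representation rather than from scratch, mirroring how the pre-trained model is dropped into existing VLN frameworks.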

Related Material


[bibtex]
@InProceedings{Hao_2020_CVPR,
author = {Hao, Weituo and Li, Chunyuan and Li, Xiujun and Carin, Lawrence and Gao, Jianfeng},
title = {Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-Training},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020},
pages = {13137-13146}
}