SplitNet: Sim2Sim and Task2Task Transfer for Embodied Visual Navigation

Daniel Gordon, Abhishek Kadian, Devi Parikh, Judy Hoffman, Dhruv Batra; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 1022-1031

Abstract


We propose SplitNet, a method for decoupling visual perception and policy learning. By incorporating auxiliary tasks and selectively learning portions of the model, we explicitly decompose the learning objectives for visual navigation into perceiving the world and acting on that perception. We show improvements over baseline models when transferring between simulators, an encouraging step towards Sim2Real. Additionally, SplitNet generalizes better to unseen environments from the same simulator and transfers faster and more effectively to novel embodied navigation tasks. Further, given only a small sample from a target domain, SplitNet can match the performance of traditional end-to-end pipelines which receive the entire dataset.
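The core idea of the abstract, decoupling a shared visual encoder from task-specific policies so that perception can be frozen while only the policy is adapted, can be sketched as follows. This is a hypothetical illustration, not the authors' implementation; the class names, the toy feature map, and the fixed number of fine-tuning steps are all assumptions made for clarity.

```python
# Hypothetical sketch of SplitNet's decoupling idea (not the paper's code).

class VisualEncoder:
    """Maps raw observations to features. In the paper this component is
    trained with auxiliary perception tasks (e.g. depth, egomotion)."""
    def __init__(self):
        self.frozen = False

    def freeze(self):
        # Stop gradient flow: perception stays fixed during transfer.
        self.frozen = True

    def forward(self, obs):
        # Stand-in for a convolutional network producing features.
        return [x * 0.5 for x in obs]


class Policy:
    """Task-specific head that acts on encoder features."""
    def __init__(self, task_name):
        self.task_name = task_name
        self.steps_trained = 0

    def update(self):
        # Stand-in for one policy-gradient update.
        self.steps_trained += 1

    def act(self, features):
        return "forward" if sum(features) > 0 else "stop"


def transfer_to_new_task(encoder, task_name, finetune_steps=3):
    """Task2Task transfer: keep the perception module, train only a
    fresh policy head on the new task."""
    encoder.freeze()
    policy = Policy(task_name)
    for _ in range(finetune_steps):
        policy.update()
    return policy


encoder = VisualEncoder()
policy = transfer_to_new_task(encoder, "point-nav")
action = policy.act(encoder.forward([1.0, 2.0]))
```

The same freeze-and-retrain pattern applies to Sim2Sim transfer: the policy is kept and only the perception module is adapted to the new simulator's visuals.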

Related Material


[bibtex]
@InProceedings{Gordon_2019_ICCV,
author = {Gordon, Daniel and Kadian, Abhishek and Parikh, Devi and Hoffman, Judy and Batra, Dhruv},
title = {SplitNet: Sim2Sim and Task2Task Transfer for Embodied Visual Navigation},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}