Embodied Amodal Recognition: Learning to Move to Perceive Objects

Jianwei Yang, Zhile Ren, Mingze Xu, Xinlei Chen, David J. Crandall, Devi Parikh, Dhruv Batra; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 2040-2050

Abstract

Passive visual systems typically fail to recognize objects in the amodal setting, where they are heavily occluded. In contrast, humans and other embodied agents have the ability to move in the environment and actively control the viewing angle to better understand object shapes and semantics. In this work, we introduce the task of Embodied Amodal Recognition (EAR): an agent is instantiated in a 3D environment close to an occluded target object, and is free to move in the environment to perform object classification, amodal object localization, and amodal object segmentation. To address this problem, we develop a new model called Embodied Mask R-CNN, with which agents learn to move strategically to improve their visual recognition abilities. We conduct experiments using a simulator for indoor environments. Experimental results show that: 1) agents with embodiment (movement) achieve better visual recognition performance than passive ones, and 2) in order to improve visual recognition abilities, agents can learn strategic paths that are different from shortest paths.
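To make the task setup concrete, below is a minimal, hypothetical Python sketch of one EAR episode loop. It is not the authors' implementation: the environment interface (env.reset, env.step), the movement policy, and the perception module are assumed placeholder names standing in for the simulator and for the components of Embodied Mask R-CNN.

# Hypothetical sketch of an Embodied Amodal Recognition (EAR) episode loop.
# env, policy, and perception are illustrative placeholders, not the paper's API.

def run_ear_episode(env, policy, perception, max_steps=10):
    obs = env.reset()  # first-person view; the target object starts heavily occluded
    predictions = []
    for _ in range(max_steps):
        # From the current view, predict the target's semantic class, its amodal
        # bounding box, and its amodal mask (full object extent, occluders included).
        predictions.append(perception.predict(obs))
        # Choose a movement (e.g., move forward, turn left/right) intended to
        # reveal more of the occluded target, then execute it in the simulator.
        action = policy.act(obs)
        obs = env.step(action)
    return predictions  # per-step predictions; later views should improve recognition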

Related Material

[bibtex]
@InProceedings{Yang_2019_ICCV,
author = {Yang, Jianwei and Ren, Zhile and Xu, Mingze and Chen, Xinlei and Crandall, David J. and Parikh, Devi and Batra, Dhruv},
title = {Embodied Amodal Recognition: Learning to Move to Perceive Objects},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}