Multimodal Future Localization and Emergence Prediction for Objects in Egocentric View With a Reachability Prior

Osama Makansi, Ozgun Cicek, Kevin Buchicchio, Thomas Brox; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 4354-4363

Abstract


In this paper, we investigate the problem of anticipating future dynamics, particularly the future location of other vehicles and pedestrians, from the view of a moving vehicle. We address two fundamental challenges: (1) the partial visibility due to the egocentric view with a single RGB camera and the considerable field-of-view change due to the egomotion of the vehicle; (2) the multimodality of the distribution of future states. In contrast to many previous works, we do not assume structural knowledge from maps. Instead, we estimate a reachability prior for certain classes of objects from the semantic map of the present image and propagate it into the future using the planned egomotion. Experiments show that the reachability prior combined with multi-hypotheses learning improves multimodal prediction of the future location of tracked objects and, for the first time, the emergence of new objects. We also demonstrate promising zero-shot transfer to unseen datasets.
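The abstract mentions multi-hypotheses learning for handling the multimodal distribution of future locations. A common formulation of this idea (a minimal sketch, not necessarily the exact loss used in the paper) is a winner-takes-all objective: the network emits K candidate future locations, and only the hypothesis closest to the ground truth is penalized, so the K outputs can specialize to different modes:

```python
import numpy as np

def wta_loss(hypotheses, target):
    """Winner-takes-all loss for multi-hypothesis prediction (illustrative).

    hypotheses: (K, 2) array of K candidate future locations (x, y).
    target:     (2,) ground-truth future location.
    Only the closest hypothesis is penalized, which allows the K outputs
    to cover different modes of the future-state distribution.
    """
    errors = np.sum((hypotheses - target) ** 2, axis=1)  # squared error per hypothesis
    return float(np.min(errors))                          # penalize only the best one

# Example: three hypotheses covering two plausible futures of a pedestrian.
hyps = np.array([[1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
gt = np.array([0.0, 1.1])
print(wta_loss(hyps, gt))  # squared distance to the closest hypothesis
```

In practice such a loss is applied per training sample during backpropagation; variants (e.g. relaxed or evolving winner-takes-all) soften the hard minimum to stabilize training.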

Related Material


[bibtex]
@InProceedings{Makansi_2020_CVPR,
author = {Makansi, Osama and Cicek, Ozgun and Buchicchio, Kevin and Brox, Thomas},
title = {Multimodal Future Localization and Emergence Prediction for Objects in Egocentric View With a Reachability Prior},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}