Short-Term Prediction and Multi-Camera Fusion on Semantic Grids

Lukas Hoyer, Patrick Kesper, Anna Khoreva, Volker Fischer; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019, pp. 0-0


An environment representation (ER) is a substantial part of every autonomous system. It introduces a common interface between perception and other system components, such as decision making, and allows downstream algorithms to deal with abstract data without knowledge of the underlying sensors. In this work, we propose and evaluate a novel architecture that generates an egocentric, grid-based, predictive, and semantically-interpretable ER, which we call a semantic grid. We show that our approach supports the spatio-temporal fusion of multiple camera sequences and short-term prediction in such an ER. Our design utilizes a strong semantic segmentation network together with depth and egomotion estimates to first extract semantic information from multiple camera streams and then transform it separately into egocentric, temporally-aligned bird's-eye view grids. A deep encoder-decoder network is trained to fuse a stack of these grids into a unified semantic grid and to predict the dynamics of the vehicle's surroundings. We evaluate this representation on real-world sequences of Cityscapes and show that our architecture can make accurate predictions in complex sensor fusion scenarios and significantly outperforms a model-driven baseline in a category-based evaluation.
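The per-camera projection step described above (semantic segmentation plus depth and an egomotion/extrinsics transform, binned into an egocentric bird's-eye-view grid) can be sketched as follows. This is a minimal illustration under assumed conventions, not the paper's implementation: all function and parameter names (`semantic_to_bev_grid`, `T_cam_to_ego`, `cell_m`) are hypothetical, and the grid layout (x lateral, z forward) is an assumption.

```python
import numpy as np

def semantic_to_bev_grid(sem, depth, K, T_cam_to_ego,
                         grid_size=(100, 100), cell_m=0.5):
    """Project a per-pixel semantic map into an egocentric bird's-eye-view
    grid using a depth map, camera intrinsics K, and a 4x4 camera-to-ego
    transform. Illustrative sketch only; names are not from the paper."""
    h, w = depth.shape
    # Pixel coordinates in homogeneous form: columns are [u, v, 1]^T.
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([us, vs, np.ones_like(us)], axis=0).reshape(3, -1)
    # Unproject to 3D camera coordinates: X = depth * K^{-1} [u, v, 1]^T.
    pts_cam = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Transform into the egocentric frame (homogeneous coordinates).
    pts_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])
    pts_ego = (T_cam_to_ego @ pts_h)[:3]
    # Bin ground-plane coordinates (x lateral, z forward) into grid cells.
    gx = (pts_ego[0] / cell_m + grid_size[1] / 2).astype(int)
    gz = (pts_ego[2] / cell_m).astype(int)
    grid = np.full(grid_size, -1, dtype=np.int32)  # -1 marks unobserved cells
    valid = (gx >= 0) & (gx < grid_size[1]) & (gz >= 0) & (gz < grid_size[0])
    grid[gz[valid], gx[valid]] = sem.reshape(-1)[valid]
    return grid
```

Running this once per camera and time step, with `T_cam_to_ego` composed from the egomotion estimate, yields a stack of temporally-aligned grids of the kind the encoder-decoder network then fuses.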

Related Material

@InProceedings{Hoyer_2019_ICCV_Workshops,
    author = {Hoyer, Lukas and Kesper, Patrick and Khoreva, Anna and Fischer, Volker},
    title = {Short-Term Prediction and Multi-Camera Fusion on Semantic Grids},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month = {Oct},
    year = {2019}
}