EMPNet: Neural Localisation and Mapping Using Embedded Memory Points

Gil Avraham, Yan Zuo, Thanuja Dharmasiri, Tom Drummond; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 8120-8129

Abstract

Continuously estimating an agent's state and a representation of its surroundings has proven vital to full autonomy. A common thread among systems that successfully achieve this feat is the integration of previously encountered observations into the current state estimate. This necessitates a memory module that incorporates previously visited states whilst simultaneously offering an internal representation of the observed environment. In this work, we develop a memory module containing rigidly aligned point-embeddings that represent a coherent scene structure acquired from an RGB-D sequence of observations. The point-embeddings are extracted using modern convolutional neural network architectures, and alignment is performed by computing a dense correspondence matrix between a new observation and the embeddings currently residing in the memory module. The whole framework is end-to-end trainable, resulting in a recurrent joint optimisation of the point-embeddings contained in the memory. This process amplifies the shared information across states, providing increased robustness and accuracy. We show significant improvements of our method across a set of experiments performed on the synthetic VIZDoom environment and the real-world Active Vision Dataset.
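To make the alignment step concrete, below is a minimal sketch, assuming PyTorch, of how a dense correspondence matrix between a new observation's point-embeddings and those held in memory could drive a rigid fit. This is not the authors' released code: the scaled dot-product similarity, softmax temperature tau, tensor shapes, and all function names are illustrative assumptions, and the rigid fit uses a standard Kabsch solve rather than the paper's exact procedure.

# A minimal sketch, assuming PyTorch; not the authors' released code.
# It illustrates the alignment described above: a dense correspondence
# matrix between a new observation's point-embeddings and those in memory,
# followed by a rigid (Kabsch) fit. Shapes, `tau`, and names are assumptions.
import torch

def soft_correspondence(obs_emb, mem_emb, tau=0.1):
    # obs_emb: (K, D) embeddings of the current observation's points.
    # mem_emb: (N, D) embeddings residing in the memory module.
    # Returns a (K, N) matrix whose rows softly assign each observed
    # point to the memory points.
    sim = obs_emb @ mem_emb.T / tau        # (K, N) similarity logits
    return torch.softmax(sim, dim=1)

def kabsch(src, dst):
    # Least-squares rigid transform (R, t) with R @ src_i + t ~= dst_i.
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = torch.linalg.svd(src_c.T @ dst_c)
    d = torch.sign(torch.linalg.det(Vt.T @ U.T)).item()
    S = torch.diag(torch.tensor([1.0, 1.0, d]))  # reflection guard, det(R)=+1
    R = Vt.T @ S @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Toy usage: align 128 observed points against 512 memory points.
K, N, D = 128, 512, 64
obs_emb, mem_emb = torch.randn(K, D), torch.randn(N, D)
obs_xyz, mem_xyz = torch.randn(K, 3), torch.randn(N, 3)

C = soft_correspondence(obs_emb, mem_emb)  # (K, N) dense correspondences
targets = C @ mem_xyz                      # soft-matched memory points, (K, 3)
R, t = kabsch(obs_xyz, targets)            # rigid pose of the new observation
aligned = obs_xyz @ R.T + t                # observation brought into map frame

The soft (row-stochastic) correspondence keeps the matching step differentiable, which is what permits the end-to-end, recurrent joint optimisation of the memory's point-embeddings described in the abstract.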

Related Material

[bibtex]
@InProceedings{Avraham_2019_ICCV,
author = {Avraham, Gil and Zuo, Yan and Dharmasiri, Thanuja and Drummond, Tom},
title = {EMPNet: Neural Localisation and Mapping Using Embedded Memory Points},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}