Robust RGB-D Odometry Using Point and Line Features
Yan Lu, Dezhen Song; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 3934-3942
Abstract
Lighting variation and uneven feature distribution are major challenges for indoor RGB-D visual odometry, where color information is often combined with depth information. To address these challenges, we fuse point and line features to form a robust odometry algorithm. Line features are abundant indoors and less sensitive to lighting changes than points. We extract 3D points and lines from RGB-D data, analyze their measurement uncertainties, and compute camera motion using maximum likelihood estimation. We prove that fusing points and lines yields smaller uncertainty in the motion estimate than using either feature type alone. In experiments we compare our method with state-of-the-art methods, including a keypoint-based approach and a dense visual odometry algorithm. Our method outperforms both counterparts under constant and varying lighting conditions; in particular, it achieves an average translational error 34.9% smaller than theirs when tested on public datasets.
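The abstract's core claim, that fusing point and line features produces a smaller motion-estimate uncertainty than either feature type alone, follows from standard Gaussian maximum likelihood fusion: information matrices (inverse covariances) of independent estimates add, so the fused covariance is no larger than either input. The sketch below illustrates this with hypothetical diagonal covariances; it is not the paper's implementation, and the numbers are invented for illustration only.

```python
import numpy as np

# Hypothetical 3x3 covariances for motion estimates obtained from
# points alone and from lines alone (illustrative values, not from the paper).
cov_points = np.diag([0.04, 0.09, 0.01])
cov_lines = np.diag([0.02, 0.16, 0.05])

# For independent Gaussian estimates, ML fusion adds information matrices:
info_fused = np.linalg.inv(cov_points) + np.linalg.inv(cov_lines)
cov_fused = np.linalg.inv(info_fused)

# Each fused variance is no larger than the smaller of the two inputs,
# mirroring the paper's claim that fusion reduces estimate uncertainty.
print(np.diag(cov_fused))
```

For these values the fused variances come out strictly below the smaller of the corresponding point-only and line-only variances, which is the qualitative behavior the paper proves for its point-plus-line estimator.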
Related Material
[pdf] [bibtex]
@InProceedings{Lu_2015_ICCV,
author = {Lu, Yan and Song, Dezhen},
title = {Robust RGB-D Odometry Using Point and Line Features},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}