Weakly Supervised End2End Deep Visual Odometry
Abstract
Visual odometry is an ill-posed problem that is used in many robotics applications, especially mapless navigation in automated driving. Recent work has shown that deep models outperform traditional approaches, particularly in localization accuracy, and significantly reduce catastrophic failures. The disadvantage of most of these models is a strong dependence on large amounts of high-quality ground truth data. However, accurate and dense depth ground truth is difficult to obtain for real-world datasets. As a result, deep models are often trained on synthetic data, which introduces a domain gap. We present a weakly supervised approach to overcome this limitation. Our approach trains on estimated optical flow, which can be generated without high-quality dense depth ground truth; it requires only ground-truth poses and raw camera images. In our experiments we show that this enables deep visual odometry to be trained efficiently on the target domain (real data) while achieving state-of-the-art performance on the KITTI dataset.
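The abstract only names the training signals (ground-truth poses and optical flow estimated from raw camera images), not the loss itself. As a rough illustration of how such a weakly supervised objective could be assembled, the PyTorch sketch below compares the rigid flow induced by predicted depth and pose against flow from an off-the-shelf estimator, plus a pose term. The model interface, batch keys, and the plain L1 pose loss are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a weakly supervised training step (illustrative, not the
# paper's implementation). Assumes a model that predicts depth for frame t and
# the relative pose t -> t+1, supervised by ground-truth poses and optical flow
# from a frozen off-the-shelf estimator (e.g., RAFT) as pseudo ground truth.
import torch
import torch.nn.functional as F

def induced_flow(depth, pose, K):
    """Rigid flow induced by predicted depth and relative camera motion.

    depth: (B, 1, H, W) predicted depth for frame t
    pose:  (B, 4, 4) predicted relative pose t -> t+1
    K:     (B, 3, 3) camera intrinsics
    Returns flow: (B, 2, H, W)
    """
    B, _, H, W = depth.shape
    # Pixel grid in homogeneous coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=depth.dtype, device=depth.device),
        torch.arange(W, dtype=depth.dtype, device=depth.device),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(1, 3, -1)
    # Back-project pixels to 3D, apply the predicted motion, re-project.
    cam = torch.linalg.inv(K) @ pix * depth.reshape(B, 1, -1)
    cam_h = torch.cat(
        [cam, torch.ones(B, 1, H * W, dtype=cam.dtype, device=cam.device)], dim=1
    )
    cam2 = (pose @ cam_h)[:, :3]
    pix2 = K @ cam2
    pix2 = pix2[:, :2] / pix2[:, 2:].clamp(min=1e-6)
    return (pix2 - pix[:, :2]).reshape(B, 2, H, W)

def training_step(model, batch, w_flow=1.0, w_pose=1.0):
    """One weakly supervised step; `model` and `batch` keys are illustrative."""
    depth, pose_pred = model(batch["img_t"], batch["img_t1"])
    flow_pred = induced_flow(depth, pose_pred, batch["K"])
    # Flow supervision from the frozen estimator (pseudo ground truth);
    # no dense depth ground truth is needed.
    loss_flow = F.l1_loss(flow_pred, batch["flow_est"])
    # Pose supervision from ground-truth relative poses (a real system would
    # likely weight rotation and translation separately).
    loss_pose = F.l1_loss(pose_pred, batch["pose_gt"])
    return w_flow * loss_flow + w_pose * loss_pose
```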
Related Material
[pdf] [bibtex]
@InProceedings{Abouee_2024_CVPR,
  author    = {Abouee, Amin and Ravi, Ashwanth and Hinneburg, Lars and Dziwulski, Mateusz and {\"O}lsner, Florian and Hess, J{\"u}rgen and Milz, Stefan and M{\"a}der, Patrik},
  title     = {Weakly Supervised End2End Deep Visual Odometry},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {858-865}
}