RPNet: an End-to-End Network for Relative Camera Pose Estimation

Sovann En, Alexis Lechervy, Frederic Jurie; Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018

Abstract


This paper addresses the task of relative camera pose estimation from raw image pixels by means of deep neural networks. The proposed RPNet network takes pairs of images as input and directly infers the relative pose, without requiring camera intrinsics or extrinsics. While state-of-the-art systems based on SIFT + RANSAC can recover the translation vector only up to scale, RPNet is trained end-to-end to produce the full translation vector. Experimental results on the Cambridge Landmarks dataset show very promising results regarding the recovery of the full translation vector. They also show that RPNet produces more accurate and more stable results than traditional approaches, especially on hard images (repetitive textures, textureless images, etc.). To the best of our knowledge, RPNet is the first attempt to recover full translation vectors in relative pose estimation.
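The abstract's key claim — that SIFT + RANSAC pipelines recover translation only up to scale — follows from the structure of the essential matrix, E = [t]×R: scaling the translation t scales E by the same factor, so point correspondences alone cannot fix the metric length of t. A minimal numpy sketch of this ambiguity (the rotation and translation values below are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix [t]x of a 3-vector t."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical relative pose: 30-degree rotation about the z-axis, plus a translation.
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.5, 0.2])

# Essential matrix relating normalized image coordinates in the two views.
E1 = skew(t) @ R
# Doubling the translation magnitude...
E2 = skew(2.0 * t) @ R

# ...just scales E: the epipolar constraint x2' E x1 = 0 is unchanged,
# so geometry alone determines t only up to scale. RPNet instead regresses
# the full translation vector directly from image pairs.
print(np.allclose(E2, 2.0 * E1))  # True
```

This is why the recovered translation from `cv2.recoverPose`-style pipelines is conventionally reported as a unit vector, and why regressing a metric translation requires the network to learn scene scale from appearance.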

Related Material


@InProceedings{En_2018_ECCV_Workshops,
author = {En, Sovann and Lechervy, Alexis and Jurie, Frederic},
title = {RPNet: an End-to-End Network for Relative Camera Pose Estimation},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV) Workshops},
month = {September},
year = {2018}
}