UprightNet: Geometry-Aware Camera Orientation Estimation From Single Images

Wenqi Xian, Zhengqi Li, Matthew Fisher, Jonathan Eisenmann, Eli Shechtman, Noah Snavely; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 9974-9983

Abstract


We introduce UprightNet, a learning-based approach for estimating 2DoF camera orientation from a single RGB image of an indoor scene. Unlike recent methods that leverage deep learning to perform black-box regression from image to orientation parameters, we propose an end-to-end framework that incorporates explicit geometric reasoning. In particular, we design a network that predicts two representations of scene geometry, in both the local camera and global reference coordinate systems, and solves for the camera orientation as the rotation that best aligns these two predictions via a differentiable least squares module. This network can be trained end-to-end, and can be supervised with both ground truth camera poses and intermediate representations of surface geometry. We evaluate UprightNet on the single-image camera orientation task on synthetic and real datasets, and show significant improvements over prior state-of-the-art approaches.
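The rotation that best aligns the two predicted geometry representations (camera-frame vs. global-frame) is an instance of the weighted orthogonal Procrustes problem, which has a closed-form SVD solution. Below is a minimal NumPy sketch of that alignment step; it is not the authors' implementation, and the function and argument names are illustrative:

```python
import numpy as np

def best_fit_rotation(cam_vecs, global_vecs, weights=None):
    """Solve argmin_R sum_i w_i * ||R c_i - g_i||^2 over rotations R
    (weighted orthogonal Procrustes / Kabsch). Each row is a 3-vector,
    e.g. a per-pixel surface normal, in camera vs. global coordinates.

    Note: this is an illustrative sketch; UprightNet implements the
    equivalent solve as a differentiable module inside the network.
    """
    if weights is None:
        weights = np.ones(len(cam_vecs))
    # Weighted cross-covariance between the two sets of vectors.
    H = (cam_vecs * weights[:, None]).T @ global_vecs
    U, _, Vt = np.linalg.svd(H)
    # Fix the sign so the result is a proper rotation (det = +1),
    # not a reflection.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```

Because the SVD is differentiable almost everywhere, gradients can flow through this solve back into the geometry-prediction network, which is what allows supervision with ground-truth camera poses in addition to intermediate surface-geometry supervision.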

Related Material


@InProceedings{Xian_2019_ICCV,
author = {Xian, Wenqi and Li, Zhengqi and Fisher, Matthew and Eisenmann, Jonathan and Shechtman, Eli and Snavely, Noah},
title = {UprightNet: Geometry-Aware Camera Orientation Estimation From Single Images},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019},
pages = {9974-9983}
}