Self-Calibrating Neural Radiance Fields

Yoonwoo Jeong, Seokjun Ahn, Christopher Choy, Anima Anandkumar, Minsu Cho, Jaesik Park; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 5846-5854

Abstract


In this work, we propose a camera self-calibration algorithm for generic cameras with arbitrary non-linear distortions. We jointly learn the geometry of the scene and accurate camera parameters without any calibration objects. Our camera model consists of a pinhole model, fourth-order radial distortion, and a generic noise model that can capture arbitrary non-linear camera distortions. While traditional self-calibration algorithms mostly rely on geometric constraints, we additionally incorporate photometric consistency. This requires learning the geometry of the scene, for which we use Neural Radiance Fields (NeRF). We also propose a new geometric loss function, the projected ray distance loss, to enforce geometric consistency for complex non-linear camera models. We validate our approach on standard real-image datasets and demonstrate that our model can learn the camera intrinsics and extrinsics (pose) from scratch without COLMAP initialization. We also show that learning accurate camera models in a differentiable manner allows us to improve PSNR over baselines.
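
The abstract describes the camera model as a composition of a pinhole model, fourth-order radial distortion, and a learned residual ("generic noise") term on the rays. The sketch below illustrates one way such a differentiable camera could be written in PyTorch; the class name, the per-pixel residual parameters, and the coordinate conventions are illustrative assumptions, not the authors' released code.

import torch
import torch.nn as nn

class SelfCalibratingCamera(nn.Module):
    """Illustrative differentiable camera: pinhole intrinsics + fourth-order
    radial distortion + a per-pixel residual ("generic noise") on ray
    directions and origins. Hypothetical sketch, not the authors' code."""

    def __init__(self, height, width, focal_init):
        super().__init__()
        self.h, self.w = height, width
        # Pinhole intrinsics (focal lengths and principal point), learnable.
        self.focal = nn.Parameter(torch.tensor([focal_init, focal_init], dtype=torch.float32))
        self.center = nn.Parameter(torch.tensor([width / 2.0, height / 2.0]))
        # Fourth-order radial distortion coefficients (k1, k2).
        self.k = nn.Parameter(torch.zeros(2))
        # Generic noise model: per-pixel residuals on ray direction and origin.
        self.dir_residual = nn.Parameter(torch.zeros(height, width, 3))
        self.origin_residual = nn.Parameter(torch.zeros(height, width, 3))

    def forward(self, cam_to_world):
        """Return per-pixel ray origins and directions in world coordinates.
        `cam_to_world` is a 3x4 camera-to-world extrinsic matrix."""
        ys, xs = torch.meshgrid(
            torch.arange(self.h, dtype=torch.float32),
            torch.arange(self.w, dtype=torch.float32),
            indexing="ij",
        )
        # Normalized pinhole coordinates.
        x = (xs - self.center[0]) / self.focal[0]
        y = (ys - self.center[1]) / self.focal[1]
        # Fourth-order radial distortion: scale by (1 + k1*r^2 + k2*r^4).
        r2 = x * x + y * y
        scale = 1.0 + self.k[0] * r2 + self.k[1] * r2 * r2
        x, y = x * scale, y * scale
        # Camera-space ray directions (camera looks down -z), plus residual.
        dirs = torch.stack([x, -y, -torch.ones_like(x)], dim=-1) + self.dir_residual
        # Rotate into world space and normalize.
        rays_d = torch.einsum("hwc,rc->hwr", dirs, cam_to_world[:3, :3])
        rays_d = rays_d / rays_d.norm(dim=-1, keepdim=True)
        rays_o = cam_to_world[:3, 3].expand_as(rays_d) + self.origin_residual
        return rays_o, rays_d

# Usage: in self-calibration the pose would itself be a learnable parameter.
cam = SelfCalibratingCamera(height=4, width=6, focal_init=5.0)
pose = torch.eye(4)[:3]  # identity camera-to-world, for illustration only
rays_o, rays_d = cam(pose)

Because every component is an nn.Parameter, the intrinsics, distortion coefficients, and residuals can in principle be optimized jointly with the radiance field and camera poses by backpropagating photometric and geometric (ray distance) losses through the generated rays.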

Related Material


BibTeX:
@InProceedings{Jeong_2021_ICCV,
    author    = {Jeong, Yoonwoo and Ahn, Seokjun and Choy, Christopher and Anandkumar, Anima and Cho, Minsu and Park, Jaesik},
    title     = {Self-Calibrating Neural Radiance Fields},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {5846-5854}
}