DDNeRF: Depth Distribution Neural Radiance Fields

David Dadon, Ohad Fried, Yacov Hel-Or; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 755-763

Abstract


The field of implicit neural representation has made significant progress. Models such as neural radiance fields (NeRF), which use relatively small neural networks, can represent high-quality scenes and achieve state-of-the-art results for novel view synthesis. Training these types of networks, however, is still computationally expensive, and the models struggle with real-life 360-degree scenes. In this work, we propose the depth distribution neural radiance field (DDNeRF), a new method that significantly increases sampling efficiency along rays during training while achieving superior results for a given sampling budget. DDNeRF achieves this performance by learning a more accurate representation of the density distribution along rays. More specifically, the proposed framework trains a coarse model to predict the internal distribution of the transparency of an input volume along each ray. This estimated distribution then guides the sampling procedure of the fine model. Our method uses fewer samples during training while achieving better output quality with the same computational resources.
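The coarse-to-fine guidance described above follows the hierarchical sampling pattern common to NeRF-style models: the coarse model's per-bin weights define a piecewise-constant probability density along the ray, and the fine model's sample depths are drawn from it by inverse-transform sampling. The sketch below illustrates this general mechanism only; the function name and the piecewise-constant parameterization are illustrative assumptions, not the paper's exact distribution model (DDNeRF predicts a more accurate continuous distribution).

```python
import numpy as np

def sample_from_coarse_pdf(bin_edges, weights, n_samples, rng):
    """Draw fine-model sample depths along one ray via inverse-transform
    sampling from a piecewise-constant PDF given by coarse-model weights.

    bin_edges: (n_bins + 1,) depths of bin boundaries between near and far.
    weights:   (n_bins,) non-negative per-bin weights from the coarse model.
    Returns (n_samples,) depths concentrated where the weights are high.
    """
    # Normalize weights into a PDF, then build the cumulative distribution.
    pdf = weights / np.maximum(weights.sum(), 1e-8)
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])

    # Stratified uniform samples in [0, 1): one per output sample.
    u = (np.arange(n_samples) + rng.uniform(size=n_samples)) / n_samples

    # Find the CDF bin containing each u, then invert linearly within it.
    idx = np.searchsorted(cdf, u, side="right") - 1
    idx = np.clip(idx, 0, len(weights) - 1)
    denom = np.maximum(cdf[idx + 1] - cdf[idx], 1e-8)
    t = (u - cdf[idx]) / denom
    return bin_edges[idx] + t * (bin_edges[idx + 1] - bin_edges[idx])
```

With weights peaked near the true surface depth, most fine samples land in that region, which is what makes a small per-ray sampling budget sufficient.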

Related Material


[bibtex]
@InProceedings{Dadon_2023_WACV,
  author    = {Dadon, David and Fried, Ohad and Hel-Or, Yacov},
  title     = {DDNeRF: Depth Distribution Neural Radiance Fields},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2023},
  pages     = {755-763}
}