LensNeRF: Rethinking Volume Rendering Based on Thin-Lens Camera Model

Min-Jung Kim, Gyojung Gu, Jaegul Choo; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 3182-3191

Abstract


Recent advances in Neural Radiance Field (NeRF) show promising results in rendering realistic novel-view images. However, NeRF and its variants tacitly assume that input images are captured with a pinhole camera and that every subject in them is in focus. In this paper, we propose aperture-aware NeRF optimization and rendering methods based on a thin-lens model (dubbed LensNeRF), which accepts defocused images of any aperture size as input and renders them as output. To generalize the pinhole camera model to a thin-lens camera model within the NeRF framework, we define multiple rays originating from the aperture area, resolving the world-to-pixel scale ambiguity. We also propose an in-focus loss that assigns the given pixel color to points on the focus plane, alleviating the color ambiguity caused by the use of multiple rays. For rigorous evaluation of the proposed method, we collect a real forward-facing dataset with different F-numbers for each viewpoint. Experimental results demonstrate that our method successfully fuses an aperture-size-adjustable thin-lens camera model into the NeRF architecture, showing favorable qualitative and quantitative results compared to baseline models. The dataset will be made available.
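
To make the thin-lens ray generation described above concrete, the sketch below samples multiple rays per pixel from points on the aperture disk, all converging at that pixel's point on the focus plane (with aperture radius zero, it degenerates to the usual pinhole ray). This is a minimal NumPy illustration of a generic thin-lens camera model; the function name, parameters, and sampling scheme are assumptions for exposition, not the authors' implementation.

import numpy as np

def thin_lens_rays(pixel_dir, cam_origin, focus_dist, aperture_radius,
                   n_rays=8, rng=None):
    """Sample thin-lens rays for one pixel (hypothetical helper, not the paper's code).

    pixel_dir       : (3,) unit direction of the ideal pinhole ray
    cam_origin      : (3,) camera center in world coordinates
    focus_dist      : distance along pixel_dir to the focus plane
    aperture_radius : lens aperture radius in world units (0 -> pinhole)
    """
    rng = rng or np.random.default_rng()
    # All lens rays for this pixel converge at its point on the focus plane.
    focus_point = cam_origin + focus_dist * pixel_dir

    # Orthonormal basis (u, v) spanning the aperture plane
    # (assumes pixel_dir is not parallel to the up vector).
    up = np.array([0.0, 1.0, 0.0])
    u = np.cross(up, pixel_dir)
    u /= np.linalg.norm(u)
    v = np.cross(pixel_dir, u)

    # Uniformly sample ray origins on the aperture disk (sqrt for uniform area).
    r = aperture_radius * np.sqrt(rng.uniform(size=n_rays))
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n_rays)
    origins = (cam_origin[None, :]
               + r[:, None] * (np.cos(theta)[:, None] * u
                               + np.sin(theta)[:, None] * v))

    # Each ray points from its lens sample toward the shared focus point.
    dirs = focus_point[None, :] - origins
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return origins, dirs

Averaging the colors rendered along these rays approximates the defocus blur induced by the given aperture size: points on the focus plane receive a consistent color from all rays, while points off the plane are blurred across pixels.
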

Related Material


@InProceedings{Kim_2024_WACV,
    author    = {Kim, Min-Jung and Gu, Gyojung and Choo, Jaegul},
    title     = {LensNeRF: Rethinking Volume Rendering Based on Thin-Lens Camera Model},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {3182-3191}
}