NEMTO: Neural Environment Matting for Novel View and Relighting Synthesis of Transparent Objects

Dongqing Wang, Tong Zhang, Sabine Süsstrunk; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 317-327

Abstract


We propose NEMTO, the first end-to-end neural rendering pipeline to model 3D transparent objects with complex geometry and unknown indices of refraction. Commonly used appearance models such as the Disney BSDF cannot accurately address this challenging problem because light paths bend through refraction and the surface appearance depends strongly on illumination. With 2D images of the transparent object as input, our method synthesizes high-quality novel views and relit renderings. We leverage implicit Signed Distance Functions (SDFs) to model the object geometry and propose a refraction-aware ray bending network to model the effects of light refraction within the object. Our ray bending network is more tolerant of geometric inaccuracies than traditional physically based methods for rendering transparent objects. We provide extensive evaluations on both synthetic and real-world datasets to demonstrate the high quality of our synthesis and the applicability of our method.
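To make the described pipeline more concrete, below is a minimal PyTorch sketch of a ray-bending network of the kind the abstract outlines: an MLP that takes a surface point, its SDF normal, and the incident view direction, and predicts a bent (refracted) outgoing direction used to query an environment map. The layer sizes, the input parameterization, and the `env_map_fn` lookup are illustrative assumptions for this sketch, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RayBendingNet(nn.Module):
    """Hypothetical ray-bending MLP: given a surface point, its SDF normal,
    and the incoming view direction, predict the outgoing (refracted) ray
    direction used to query the environment map. Architecture details are
    illustrative, not the paper's exact design."""
    def __init__(self, hidden_dim=256, num_layers=4):
        super().__init__()
        layers = []
        in_dim = 3 + 3 + 3  # surface point, normal, incident direction
        for _ in range(num_layers):
            layers += [nn.Linear(in_dim, hidden_dim), nn.ReLU(inplace=True)]
            in_dim = hidden_dim
        layers += [nn.Linear(hidden_dim, 3)]  # predicted outgoing direction
        self.mlp = nn.Sequential(*layers)

    def forward(self, x, n, d_in):
        # Concatenate geometric cues, predict a direction, and normalize it.
        out = self.mlp(torch.cat([x, n, d_in], dim=-1))
        return F.normalize(out, dim=-1)

def render_transparent(points, normals, view_dirs, bend_net, env_map_fn):
    """Shade surface hits by looking up the environment along the bent rays.
    `env_map_fn` maps unit directions to RGB radiance (e.g., an HDR lat-long
    lookup); it stands in for whatever environment representation is used."""
    d_out = bend_net(points, normals, view_dirs)
    return env_map_fn(d_out)

# Toy usage: random surface samples and a constant-gradient environment.
if __name__ == "__main__":
    net = RayBendingNet()
    pts = torch.rand(1024, 3)
    nrm = F.normalize(torch.rand(1024, 3) - 0.5, dim=-1)
    dirs = F.normalize(torch.rand(1024, 3) - 0.5, dim=-1)
    rgb = render_transparent(pts, nrm, dirs, net, lambda d: 0.5 * (d + 1.0))
    print(rgb.shape)  # torch.Size([1024, 3])
```

In such a setup, the predicted direction replaces an explicit physically based refraction trace through the SDF surface, which is what would make the lookup tolerant of moderate geometric error; the network and geometry would be supervised jointly from the input images.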

Related Material


[bibtex]
@InProceedings{Wang_2023_ICCV,
    author    = {Wang, Dongqing and Zhang, Tong and S\"usstrunk, Sabine},
    title     = {NEMTO: Neural Environment Matting for Novel View and Relighting Synthesis of Transparent Objects},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {317-327}
}