Lighting, Reflectance and Geometry Estimation From 360° Panoramic Stereo

Junxuan Li, Hongdong Li, Yasuyuki Matsushita; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 10591-10600

Abstract


We propose a method for estimating high-definition spatially-varying lighting, reflectance, and geometry of a scene from 360° stereo images. Our model takes advantage of the 360° input to observe the entire scene with geometric detail, then jointly estimates the scene's properties with physical constraints. We first reconstruct a near-field environment light for predicting the lighting at any 3D location within the scene. Then we present a deep learning model that leverages the stereo information to infer the reflectance and surface normal. Lastly, we incorporate the physical constraints between lighting and geometry to refine the reflectance of the scene. Both quantitative and qualitative experiments show that our method, benefiting from the 360° observation of the scene, outperforms prior state-of-the-art methods and enables more augmented reality applications such as mirror-object insertion.
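The core idea behind the near-field environment light can be illustrated with a simple geometric sketch: back-project a 360° RGB-D panorama into a 3D point cloud, then, for any query location, re-project the scene points into a small equirectangular map centered at that location to approximate its incident lighting. The code below is a minimal NumPy sketch under this interpretation (the function names, resolutions, and nearest-point splatting heuristic are illustrative assumptions, not the paper's actual implementation, which uses learned depth and a neural refinement):

```python
import numpy as np

def panorama_to_points(rgb, depth):
    """Back-project an equirectangular RGB-D panorama to a colored point cloud.

    rgb:   (H, W, 3) colors in [0, 1]
    depth: (H, W) distance along each pixel's viewing ray
    """
    H, W = depth.shape
    # Spherical angles at pixel centers (equirectangular layout).
    theta = (np.arange(W) + 0.5) / W * 2 * np.pi - np.pi   # azimuth in [-pi, pi]
    phi = (np.arange(H) + 0.5) / H * np.pi - np.pi / 2     # elevation
    t, p = np.meshgrid(theta, phi)
    dirs = np.stack([np.cos(p) * np.sin(t),
                     np.sin(p),
                     np.cos(p) * np.cos(t)], axis=-1)      # unit rays, (H, W, 3)
    points = dirs * depth[..., None]
    return points.reshape(-1, 3), rgb.reshape(-1, 3)

def near_field_envmap(points, colors, query, out_h=32, out_w=64):
    """Splat scene points into an equirectangular environment map centered at
    `query`, approximating the spatially-varying near-field lighting there."""
    vec = points - query
    dist = np.linalg.norm(vec, axis=-1)
    valid = dist > 1e-6
    vec, colors, dist = vec[valid], colors[valid], dist[valid]
    d = vec / dist[:, None]
    theta = np.arctan2(d[:, 0], d[:, 2])
    phi = np.arcsin(np.clip(d[:, 1], -1.0, 1.0))
    u = ((theta + np.pi) / (2 * np.pi) * out_w).astype(int) % out_w
    v = ((phi + np.pi / 2) / np.pi * out_h).astype(int).clip(0, out_h - 1)
    env = np.zeros((out_h, out_w, 3))
    # Write far points first so the nearest surface wins in each bin.
    order = np.argsort(dist)[::-1]
    env[v[order], u[order]] = colors[order]
    return env
```

Because the environment map is rebuilt per query location, nearby geometry occludes and colors the lighting differently at each point, which is what distinguishes this near-field model from a single distant (infinitely far) environment map.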

Related Material


BibTeX:
@InProceedings{Li_2021_CVPR,
  author    = {Li, Junxuan and Li, Hongdong and Matsushita, Yasuyuki},
  title     = {Lighting, Reflectance and Geometry Estimation From 360deg Panoramic Stereo},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2021},
  pages     = {10591-10600}
}