Generating Material-Aware 3D Models from Sparse Views

Shi Mao, Chenming Wu, Ran Yi, Zhelun Shen, Liangjun Zhang, Wolfgang Heidrich; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 1400-1409

Abstract


Image-to-3D diffusion models have significantly advanced 3D content generation. However, existing methods often struggle to disentangle material and illumination from their coupled appearance, as they primarily focus on modeling geometry and appearance. This paper introduces a novel approach to generating material-aware 3D models from sparse-view images using generative models and efficient pre-integrated rendering. The output of our method is a relightable model that independently represents geometry, material, and lighting, enabling downstream tasks to manipulate these components separately. To fully leverage the information in the limited sparse views, we propose a mixed supervision framework that simultaneously exploits view consistency from the captured views and a diffusion prior from generated views. Additionally, a view selection mechanism is proposed to mitigate the degenerate diffusion prior. We adapt an efficient yet powerful pre-integrated rendering pipeline to factorize the scene into a differentiable environment illumination, a spatially varying material field, and an implicit SDF field. Our experiments on both real-world and synthetic datasets demonstrate the effectiveness of our approach in decomposing each component as well as in manipulating the illumination. Source code is available at https://github.com/Sheldonmao/MatSparse3D
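The factorization described in the abstract can be illustrated with a minimal sketch. The code below is not the paper's implementation; every function here is a hypothetical toy stand-in. It shows the three decomposed components (an implicit SDF for geometry, a spatially varying material field, and a pre-integrated environment illumination) combined in a split-sum-style shading step, which is the general pattern behind pre-integrated rendering.

```python
import numpy as np

# Illustrative sketch only: toy stand-ins for the three factored components.

def sdf(x):
    """Implicit geometry: signed distance to a unit sphere (toy scene)."""
    return np.linalg.norm(x) - 1.0

def sdf_normal(x, eps=1e-4):
    """Surface normal as the normalized finite-difference SDF gradient."""
    grad = np.array([
        sdf(x + eps * np.eye(3)[i]) - sdf(x - eps * np.eye(3)[i])
        for i in range(3)
    ]) / (2 * eps)
    return grad / np.linalg.norm(grad)

def material(x):
    """Spatially varying material field: (RGB albedo, scalar roughness)."""
    albedo = 0.5 + 0.5 * np.cos(x)       # toy spatial variation
    roughness = 0.4
    return albedo, roughness

def irradiance(n):
    """Pre-integrated diffuse illumination for normal n (toy sky model)."""
    return np.array([0.8, 0.9, 1.0]) * max(n[2], 0.0) + 0.1

def prefiltered_env(r, roughness):
    """Pre-filtered specular environment lookup, blurred by roughness (toy)."""
    sharp = np.array([1.0, 0.95, 0.9]) * max(r[2], 0.0)
    return (1.0 - roughness) * sharp + roughness * 0.2

def shade(x, view_dir):
    """Split-sum-style shading: diffuse + specular pre-integrated terms."""
    n = sdf_normal(x)
    albedo, rough = material(x)
    r = view_dir - 2.0 * np.dot(view_dir, n) * n   # mirror reflection direction
    return albedo * irradiance(n) + 0.04 * prefiltered_env(r, rough)

# Shade a point on the sphere surface seen from straight above.
color = shade(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0]))
print(color)
```

Because each component is a separate differentiable function, relighting amounts to swapping `irradiance` / `prefiltered_env` for a new environment while the SDF and material fields stay fixed, which is the kind of downstream manipulation the abstract refers to.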

Related Material


[pdf]
[bibtex]
@InProceedings{Mao_2024_CVPR,
    author    = {Mao, Shi and Wu, Chenming and Yi, Ran and Shen, Zhelun and Zhang, Liangjun and Heidrich, Wolfgang},
    title     = {Generating Material-Aware 3D Models from Sparse Views},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {1400-1409}
}