Recon3D: High Quality 3D Reconstruction from a Single Image Using Generated Back-View Explicit Priors

Ruiyang Chen, Mohan Yin, Jiawei Shen, Wei Ma; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 2802-2811

Abstract


Significant progress has been achieved in deep 3D reconstruction from a single frontal view with the aid of generative models; however, the unreliable nature of generated multi-views continues to present challenges in this domain. In this study, we propose Recon3D, a novel framework for 3D reconstruction. Recon3D exclusively utilizes a generated back view, which can be obtained more reliably through generative models based on the frontal reference image, as explicit priors. By incorporating these priors and guidance from a generative model, which is fine-tuned with DreamBooth and then enhanced with ControlNet, we effectively supervise NeRF rendering in the latent space. Subsequently, we convert the NeRF representation into an explicit point cloud and further optimize the explicit representation by referencing high-quality textured reference views. Extensive experiments demonstrate that our method achieves state-of-the-art performance in rendering novel views with superior geometry and texture quality.
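The pipeline summarized above can be sketched as an ordered stage outline. This is purely an illustrative sketch of the described workflow; every function and stage name below is a hypothetical placeholder, not the authors' actual implementation.

```python
# Hypothetical stage outline of the Recon3D pipeline described in the
# abstract. All names are illustrative placeholders.

def run_recon3d_pipeline(front_view: str):
    """Trace the high-level stages of the described pipeline in order."""
    stages = []

    # 1. Generate a back view from the frontal reference image with a
    #    generative model; the paper argues a single back view is more
    #    reliable than full multi-view generation.
    back_view = f"back_of({front_view})"
    stages.append("generate_back_view")

    # 2. Fine-tune a diffusion model on the reference (DreamBooth) and
    #    add structural conditioning (ControlNet) for guidance.
    stages.append("finetune_dreambooth_controlnet")

    # 3. Supervise NeRF rendering in the latent space, using the front
    #    view and generated back view as explicit priors.
    stages.append("optimize_nerf_in_latent_space")

    # 4. Convert the optimized NeRF into an explicit point cloud.
    stages.append("nerf_to_point_cloud")

    # 5. Refine the explicit representation against the high-quality
    #    textured reference views.
    stages.append("refine_point_cloud")

    return stages, back_view
```

The sketch only encodes the order of the stages named in the abstract; the actual losses, model choices, and conversion details are specified in the paper itself.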

Related Material


[pdf]
[bibtex]
@InProceedings{Chen_2024_CVPR,
    author    = {Chen, Ruiyang and Yin, Mohan and Shen, Jiawei and Ma, Wei},
    title     = {Recon3D: High Quality 3D Reconstruction from a Single Image Using Generated Back-View Explicit Priors},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {2802-2811}
}