LIST: Learning Implicitly from Spatial Transformers for Single-View 3D Reconstruction

Mohammad Samiul Arshad, William J. Beksi; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 9321-9330

Abstract

Accurately reconstructing both the geometric and topological details of a 3D object from a single 2D image is a fundamental challenge in computer vision. Existing explicit and implicit solutions to this problem struggle to recover self-occluded geometry and/or to faithfully reconstruct topological shape structures. To resolve this dilemma, we introduce LIST, a novel neural architecture that leverages local and global image features to accurately reconstruct the geometric and topological structure of a 3D object from a single image. We utilize global 2D features to predict a coarse shape of the target object and then use it as a base for higher-resolution reconstruction. By leveraging both local 2D features from the image and 3D features from the coarse prediction, we can accurately predict the signed distance between an arbitrary point and the target surface via an implicit predictor. Furthermore, our model does not require camera estimation or pixel alignment, and its reconstruction is unbiased with respect to the input-view direction. Through qualitative and quantitative analysis, we show that our model outperforms the state of the art in reconstructing 3D objects from both synthetic and real-world images.
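For context, the two-stage pipeline the abstract describes (a global image feature decoded into a coarse shape, then an implicit head that fuses local 2D features with 3D features sampled from that coarse prediction to predict signed distances) can be sketched in a few lines of PyTorch. Everything below is an illustrative assumption rather than the authors' implementation: the module names (CoarseDecoder, ImplicitSDFHead, sample_coarse_features), the feature dimensions, and the random tensors standing in for the paper's spatial-transformer feature extractors are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseDecoder(nn.Module):
    """Hypothetical stand-in: decode a global image feature into a
    coarse occupancy volume (the paper's low-resolution shape base)."""
    def __init__(self, feat_dim=256, res=16):
        super().__init__()
        self.res = res
        self.fc = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, res ** 3),
        )

    def forward(self, g):                       # g: (B, feat_dim)
        v = torch.sigmoid(self.fc(g))           # coarse occupancy in [0, 1]
        return v.view(-1, 1, self.res, self.res, self.res)

class ImplicitSDFHead(nn.Module):
    """Hypothetical implicit predictor: map a query point plus its local
    2D feature and its 3D feature (sampled from the coarse prediction)
    to a signed distance from the target surface."""
    def __init__(self, local_dim=64, vox_dim=1, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + local_dim + vox_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),               # signed distance
        )

    def forward(self, pts, local_feat, vox_feat):
        return self.mlp(torch.cat([pts, local_feat, vox_feat], dim=-1))

def sample_coarse_features(volume, pts):
    """Trilinearly sample a per-point 3D feature from the coarse volume;
    assumes query points lie in [-1, 1]^3, matching grid_sample."""
    grid = pts.view(pts.size(0), -1, 1, 1, 3)            # (B, N, 1, 1, 3)
    feat = F.grid_sample(volume, grid, align_corners=True)
    return feat.view(volume.size(0), volume.size(1), -1).permute(0, 2, 1)

# Toy forward pass with random stand-ins for the image encoders.
B, N = 2, 1024
global_feat = torch.randn(B, 256)               # from a global 2D encoder
local_feat = torch.randn(B, N, 64)              # per-point local 2D features
pts = torch.rand(B, N, 3) * 2 - 1               # query points in [-1, 1]^3

coarse = CoarseDecoder()(global_feat)           # (B, 1, 16, 16, 16)
vox_feat = sample_coarse_features(coarse, pts)  # (B, N, 1)
sdf = ImplicitSDFHead()(pts, local_feat, vox_feat)
print(sdf.shape)                                # torch.Size([2, 1024, 1])
```

The design point mirrored here is that the implicit head never sees camera parameters: it conditions only on per-point features, consistent with the abstract's claim that no camera estimation or pixel alignment is required.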

Related Material

[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Arshad_2023_ICCV,
    author    = {Arshad, Mohammad Samiul and Beksi, William J.},
    title     = {LIST: Learning Implicitly from Spatial Transformers for Single-View 3D Reconstruction},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {9321-9330}
}