Neural Fields for Co-Reconstructing 3D Objects from Incidental 2D Data

Dylan Campbell, Eldar Insafutdinov, Joao F. Henriques, Andrea Vedaldi; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 2883-2893

Abstract


We ask whether 3D objects can be reconstructed from real-world data collected for some other purpose, such as autonomous driving or augmented reality, thus inferring objects only incidentally. 3D reconstruction from incidental data is a major challenge because, in addition to significant noise, only a few views of each object are observed, which are insufficient for reconstruction. We approach this problem as a co-reconstruction task, where multiple objects are reconstructed together, learning shape and appearance priors for regularization. To do so, we introduce a neural radiance field that is conditioned via an attention mechanism on the identity of the individual objects. We further disentangle shape from appearance, and diffuse color from specular color, via an asymmetric two-stream network which factors shared information from instance-specific details. We demonstrate the ability of this method to reconstruct full 3D objects from partial, incidental observations in autonomous driving and other datasets.
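As a rough illustration of the conditioning idea described above, the sketch below shows one way a radiance field conditioned on per-object identity codes via attention, with a diffuse/specular color split, could be written in PyTorch. Every module name, dimension, and design choice here is an illustrative assumption and is not taken from the paper.

```python
# Minimal PyTorch sketch of an instance-conditioned neural radiance field.
# All names, dimensions, and the attention layout are illustrative assumptions,
# not the authors' actual architecture.
import torch
import torch.nn as nn


def positional_encoding(x, num_freqs=6):
    # Map coordinates to sin/cos features at multiple frequencies (standard NeRF-style encoding).
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device) * torch.pi
    angles = x[..., None] * freqs                      # (..., dim, num_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)                   # (..., dim * 2 * num_freqs)


class CoReconstructionField(nn.Module):
    """Radiance field shared across many objects, conditioned on an instance identity code."""

    def __init__(self, num_instances, code_dim=64, hidden=128, num_freqs=6):
        super().__init__()
        # One learnable code per object; the shared MLP acts as a learned shape/appearance prior.
        self.instance_codes = nn.Embedding(num_instances, code_dim)
        self.num_freqs = num_freqs
        in_dim = 3 * 2 * num_freqs
        self.point_proj = nn.Linear(in_dim, hidden)
        self.code_proj = nn.Linear(code_dim, hidden)
        # Cross-attention: point features attend to the instance code.
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        # Shared stream: density plus view-independent (diffuse) color.
        self.diffuse_head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 4)
        )
        # View-dependent (specular) stream: residual color from the viewing direction.
        dir_dim = 3 * 2 * num_freqs
        self.specular_head = nn.Sequential(
            nn.Linear(hidden + dir_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3)
        )

    def forward(self, points, view_dirs, instance_ids):
        # points: (B, N, 3), view_dirs: (B, N, 3), instance_ids: (B,)
        feats = self.point_proj(positional_encoding(points, self.num_freqs))     # (B, N, H)
        codes = self.code_proj(self.instance_codes(instance_ids)).unsqueeze(1)   # (B, 1, H)
        cond, _ = self.attn(query=feats, key=codes, value=codes)                 # (B, N, H)
        feats = feats + cond                                                     # residual conditioning on identity
        diffuse = self.diffuse_head(feats)
        density = torch.relu(diffuse[..., :1])
        rgb_diffuse = torch.sigmoid(diffuse[..., 1:])
        dir_enc = positional_encoding(view_dirs, self.num_freqs)
        rgb_specular = self.specular_head(torch.cat([feats, dir_enc], dim=-1))
        rgb = torch.clamp(rgb_diffuse + rgb_specular, 0.0, 1.0)
        return density, rgb
```

In this sketch, the shared MLP and codebook stand in for the information factored across instances, while the per-object embedding supplies instance-specific detail; the outputs would feed a standard volume-rendering loop for training against the observed views.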

Related Material


@InProceedings{Campbell_2024_CVPR,
    author    = {Campbell, Dylan and Insafutdinov, Eldar and Henriques, Joao F. and Vedaldi, Andrea},
    title     = {Neural Fields for Co-Reconstructing 3D Objects from Incidental 2D Data},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {2883-2893}
}