Unbiased 4D: Monocular 4D Reconstruction With a Neural Deformation Model

Erik C.M. Johnson, Marc Habermann, Soshi Shimada, Vladislav Golyanik, Christian Theobalt; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 6598-6607

Abstract


Capturing general deforming scenes is crucial for many applications in computer graphics and vision, and it is especially challenging when only a monocular RGB video of the scene is available. Competing methods assume dense point tracks over the input views, 3D templates, or large-scale training datasets, or capture only small-scale deformations. In stark contrast, our method makes none of these assumptions while significantly outperforming the previous state of the art in challenging scenarios. Moreover, our technique includes two components that are new in the context of non-rigid 3D reconstruction: 1) a coordinate-based and implicit neural representation for non-rigid scenes, which enables an unbiased reconstruction of dynamic scenes, and 2) a novel dynamic scene flow loss, which enables the reconstruction of larger deformations. Results on our new dataset, which will be made publicly available, demonstrate a clear improvement over the state of the art in terms of surface reconstruction accuracy and robustness to large deformations.
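
To make the two named components more concrete, below is a minimal, hypothetical sketch of a coordinate-based neural deformation field paired with a scene-flow-style loss. It is not the authors' implementation: the network width, depth, input parameterization (x, y, z, t), and the source of the reference flow are all assumptions made purely for illustration.

```python
import torch
import torch.nn as nn


class DeformationField(nn.Module):
    """Coordinate-based MLP mapping a 3D point and a time value to a 3D offset.

    Hypothetical sketch: layer widths, depth, and the lack of positional
    encoding are assumptions, not the paper's architecture.
    """

    def __init__(self, hidden: int = 128, depth: int = 4):
        super().__init__()
        layers, in_dim = [], 4  # input is (x, y, z, t)
        for _ in range(depth):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers.append(nn.Linear(hidden, 3))  # predicted 3D offset
        self.mlp = nn.Sequential(*layers)

    def forward(self, points: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # points: (N, 3), t: (N, 1) -> deformed points: (N, 3)
        return points + self.mlp(torch.cat([points, t], dim=-1))


def scene_flow_loss(field: DeformationField,
                    points: torch.Tensor,
                    t0: torch.Tensor,
                    t1: torch.Tensor,
                    reference_flow: torch.Tensor) -> torch.Tensor:
    """Penalize the mismatch between the flow induced by the deformation field
    between two time steps and a reference flow estimate (illustrative only)."""
    induced_flow = field(points, t1) - field(points, t0)
    return torch.mean((induced_flow - reference_flow) ** 2)
```

Supervising the flow between frames, rather than only per-frame geometry, is what allows larger deformations to be constrained; the exact loss formulation used in the paper may differ from this sketch.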

Related Material


@InProceedings{Johnson_2023_CVPR,
    author    = {Johnson, Erik C.M. and Habermann, Marc and Shimada, Soshi and Golyanik, Vladislav and Theobalt, Christian},
    title     = {Unbiased 4D: Monocular 4D Reconstruction With a Neural Deformation Model},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {6598-6607}
}