DualPM: Dual Posed-Canonical Point Maps for 3D Shape and Pose Reconstruction

Ben Kaye, Tomas Jakab, Shangzhe Wu, Christian Rupprecht, Andrea Vedaldi; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 6425-6435

Abstract


The choice of data representation is a key factor in the success of deep learning in geometric tasks. For instance, DUSt3R has recently introduced the concept of viewpoint-invariant point maps, generalizing depth prediction, and showing that one can reduce all the key problems in the 3D reconstruction of static scenes to predicting such point maps. In this paper, we develop an analogous concept for a very different problem, namely, the reconstruction of the 3D shape and pose of deformable objects. To this end, we introduce the Dual Point Maps (DualPM), where a pair of point maps is extracted from the same image, one associating pixels to their 3D locations on the object, and the other to a canonical version of the object at rest pose. We also extend point maps to amodal reconstruction, seeing through self-occlusions to obtain the complete shape of the object. We show that 3D reconstruction and 3D pose estimation reduce to the prediction of the DualPMs. We demonstrate empirically that this representation is a good target for a deep network to predict; specifically, we consider modeling quadrupeds, showing that DualPMs can be trained purely on 3D synthetic data, consisting of one or two models per category, while generalizing very well to real images. With this, we improve by a large margin on previous methods for the 3D analysis and reconstruction of this type of object.
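To make the dual representation concrete, below is a minimal, hypothetical sketch in PyTorch, not the authors' implementation: a placeholder network head that emits a posed and a canonical point map per pixel, plus a simple Kabsch least-squares alignment between the two maps as a coarse rigid stand-in for the articulated pose recovery described in the paper. All function and module names here are illustrative assumptions.

    import torch

    def predict_dual_point_maps(image, backbone, head):
        # Hypothetical wrapper: map an image to a pair of point maps.
        # `backbone` and `head` are placeholder modules, not the paper's architecture.
        feats = backbone(image)        # (B, C, H, W) image features
        out = head(feats)              # (B, 7, H, W): 3 posed + 3 canonical + 1 mask channels
        posed = out[:, 0:3]            # per-pixel 3D points of the object as observed (posed)
        canonical = out[:, 3:6]        # per-pixel 3D points on the canonical rest-pose object
        mask = out[:, 6:7].sigmoid()   # foreground probability
        return posed, canonical, mask

    def rigid_fit(canonical_pts, posed_pts):
        # Kabsch least-squares rotation/translation aligning canonical points (N, 3)
        # to their posed counterparts (N, 3). A rigid fit is only a coarse proxy for
        # the articulated deformation the paper recovers from the dual point maps.
        mu_c = canonical_pts.mean(dim=0)
        mu_p = posed_pts.mean(dim=0)
        H = (canonical_pts - mu_c).T @ (posed_pts - mu_p)        # 3x3 cross-covariance
        U, _, Vt = torch.linalg.svd(H)
        d = torch.sign(torch.det(Vt.T @ U.T)).item()             # guard against reflections
        D = torch.diag(torch.tensor([1.0, 1.0, d], device=H.device, dtype=H.dtype))
        R = Vt.T @ D @ U.T
        t = mu_p - R @ mu_c
        return R, t

In this sketch, the posed map plays the role of a camera-frame point map (as in DUSt3R), while the canonical map ties every pixel to the same rest-pose coordinate frame across images; it is this pairing that turns pose estimation into a correspondence-fitting problem between the two maps.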

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Kaye_2025_CVPR,
    author    = {Kaye, Ben and Jakab, Tomas and Wu, Shangzhe and Rupprecht, Christian and Vedaldi, Andrea},
    title     = {DualPM: Dual Posed-Canonical Point Maps for 3D Shape and Pose Reconstruction},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {6425-6435}
}