MAPConNet: Self-supervised 3D Pose Transfer with Mesh and Point Contrastive Learning

Jiaze Sun, Zhixiang Chen, Tae-Kyun Kim; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 14452-14462

Abstract

3D pose transfer is a challenging generation task that aims to transfer the pose of a source geometry onto a target geometry with the target identity preserved. Many prior methods require keypoint annotations to find correspondence between the source and target. Current pose transfer methods allow end-to-end correspondence learning but require the desired final output as ground truth for supervision. Unsupervised methods have been proposed for graph convolutional models but they require ground truth correspondence between the source and target inputs. We present a novel self-supervised framework for 3D pose transfer which can be trained in unsupervised, semi-supervised, or fully supervised settings without any correspondence labels. We introduce two contrastive learning constraints in the latent space: a mesh-level loss for disentangling global patterns including pose and identity, and a point-level loss for discriminating local semantics. We demonstrate quantitatively and qualitatively that our method achieves state-of-the-art results in supervised 3D pose transfer, with comparable results in unsupervised and semi-supervised settings. Our method is also generalisable to unseen human and animal data with complex topologies.
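
The two latent-space constraints are the core of the framework, so a concrete sketch may help. The code below is an illustrative reconstruction, not the authors' released implementation: it assumes an encoder that produces one global pose embedding per mesh and one feature vector per point, and the function names, tensor shapes, anchor/negative counts, and temperature are all assumptions made here for concreteness. Both losses are written in a generic InfoNCE style.

```python
import torch
import torch.nn.functional as F

def info_nce(query, positive, negatives, temperature=0.07):
    """Generic InfoNCE loss: pull query towards the positive, push it away
    from the negatives. Shapes: query (D,), positive (D,), negatives (N, D)."""
    query = F.normalize(query, dim=-1)
    keys = F.normalize(torch.cat([positive.unsqueeze(0), negatives], dim=0), dim=-1)
    logits = keys @ query / temperature          # (1 + N,) similarity scores
    labels = torch.zeros(1, dtype=torch.long)    # positive sits at index 0
    return F.cross_entropy(logits.unsqueeze(0), labels)

def mesh_level_loss(pose_emb_out, pose_emb_src, pose_emb_others):
    """Mesh-level contrast (sketch): the output's global pose code should
    match the pose source's code and differ from other meshes' codes,
    disentangling pose from identity at the whole-mesh level."""
    return info_nce(pose_emb_out, pose_emb_src, pose_emb_others)

def point_level_loss(point_feats_out, point_feats_tgt,
                     num_anchors=128, num_negatives=64):
    """Point-level contrast (sketch): each output point feature should match
    the corresponding point of the identity target and differ from randomly
    sampled other points. point_feats_*: (V, D), corresponding vertex order
    is assumed here purely for illustration."""
    V = point_feats_out.shape[0]
    losses = []
    for i in torch.randperm(V)[:num_anchors]:    # subsample anchors for speed
        neg_idx = torch.randperm(V)[:num_negatives]
        neg_idx = neg_idx[neg_idx != i]          # drop the positive index
        losses.append(info_nce(point_feats_out[i],
                               point_feats_tgt[i],
                               point_feats_tgt[neg_idx]))
    return torch.stack(losses).mean()
```

In this framing, the mesh-level term contrasts whole-mesh pose codes while the point-level term contrasts features of corresponding points, mirroring the global/local split described in the abstract.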

Related Material

@InProceedings{Sun_2023_ICCV,
    author    = {Sun, Jiaze and Chen, Zhixiang and Kim, Tae-Kyun},
    title     = {MAPConNet: Self-supervised 3D Pose Transfer with Mesh and Point Contrastive Learning},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {14452-14462}
}