Contact-Aware Retargeting of Skinned Motion

Ruben Villegas, Duygu Ceylan, Aaron Hertzmann, Jimei Yang, Jun Saito; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 9720-9729

Abstract


This paper introduces a motion retargeting method that preserves self-contacts and prevents interpenetration. Self-contacts, such as when hands touch each other, the torso, or the head, are important attributes of human body language and dynamics, yet existing methods do not model or preserve them. Likewise, self-penetrations, such as a hand passing into the torso, are a typical artifact of motion estimation methods. The input to our method is a human motion sequence together with a target skeleton and character geometry. The method identifies self-contacts and ground contacts in the input motion, then optimizes the motion applied to the output skeleton so that these contacts are preserved and self-penetrations are reduced. We introduce a novel geometry-conditioned recurrent network with an encoder-space optimization strategy that achieves efficient retargeting while satisfying contact constraints. In experiments, our results quantitatively outperform previous methods, and in a user study our retargeted motions are rated as higher quality than those produced by recent works. We also show that our method generalizes to motion estimated from human videos, where we improve over previous works that produce noticeable interpenetration.
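The contact-identification step described above can be illustrated with a simple proximity test: vertex pairs that lie on different body parts and come closer than a distance threshold are flagged as self-contacts. The following is a minimal sketch under assumed inputs (per-vertex positions and part labels; the function name `detect_self_contacts` and the threshold value are illustrative, not the paper's actual implementation):

```python
import numpy as np

def detect_self_contacts(verts, part_ids, threshold=0.02):
    """Flag vertex pairs on different body parts closer than `threshold`.

    verts:    (N, 3) array of mesh vertex positions (e.g. in meters)
    part_ids: (N,) integer label of the body part each vertex belongs to
    Returns a list of (i, j) index pairs with i < j.
    """
    # Pairwise Euclidean distances between all vertices, shape (N, N).
    diff = verts[:, None, :] - verts[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)

    # Only count proximity between *different* body parts as contact;
    # neighboring vertices on the same part are always close.
    cross_part = part_ids[:, None] != part_ids[None, :]

    i, j = np.where((dist < threshold) & cross_part)
    keep = i < j  # drop symmetric duplicates (j, i)
    return list(zip(i[keep].tolist(), j[keep].tolist()))

# Toy example: vertices 0-1 belong to part 0 (e.g. hand),
# vertices 2-3 to part 1 (e.g. torso); vertex 2 nearly touches vertex 0.
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.005, 0.0, 0.0],
                  [2.0, 0.0, 0.0]])
part_ids = np.array([0, 0, 1, 1])
contacts = detect_self_contacts(verts, part_ids)  # → [(0, 2)]
```

In the full method such detected contact pairs (plus ground contacts) become constraints that the encoder-space optimization must satisfy while retargeting the motion to the new skeleton and geometry.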

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Villegas_2021_ICCV,
    author    = {Villegas, Ruben and Ceylan, Duygu and Hertzmann, Aaron and Yang, Jimei and Saito, Jun},
    title     = {Contact-Aware Retargeting of Skinned Motion},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {9720-9729}
}