Perpetual Humanoid Control for Real-time Simulated Avatars

Zhengyi Luo, Jinkun Cao, Alexander Winkler, Kris Kitani, Weipeng Xu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 10895-10904

Abstract


We present a physics-based humanoid controller that achieves high-fidelity motion imitation and fault-tolerant behavior in the presence of noisy input (e.g., pose estimates from video or generated from language) and unexpected falls. Our controller scales up to learning ten thousand motion clips without using any external stabilizing forces and learns to naturally recover from fail-states. Given a reference motion, our controller can perpetually control simulated avatars without requiring resets. At its core, we propose the progressive multiplicative control policy (PMCP), which dynamically allocates new network capacity to learn harder and harder motion sequences. PMCP allows efficient scaling for learning from large-scale motion databases and adding new tasks, such as fail-state recovery, without catastrophic forgetting. We demonstrate the effectiveness of our controller by using it to imitate noisy poses from video-based pose estimators and language-based motion generators in a live, real-time, multi-person avatar use case.
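The PMCP described above composes a set of primitive policies multiplicatively and grows that set as harder motion clips (or new tasks such as fail-state recovery) are introduced, keeping earlier primitives fixed so previously learned motions are not overwritten. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the authors' implementation: the class and function names (ProgressiveMultiplicativePolicy, add_primitive), the network sizes, and the observation/action dimensions are assumptions, and the fusion rule shown is a standard precision-weighted product of Gaussians.

```python
# Hypothetical sketch of a progressive multiplicative control policy:
# primitive policies output Gaussian action distributions that are fused
# multiplicatively via a gating network; new primitives can be appended
# (with old ones frozen) to add capacity without catastrophic forgetting.
import torch
import torch.nn as nn


def mlp(in_dim, out_dim, hidden=256):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )


class ProgressiveMultiplicativePolicy(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.obs_dim, self.act_dim = obs_dim, act_dim
        self.primitives = nn.ModuleList()   # each primitive outputs (mu, log_sigma)
        self.gate = None                    # rebuilt whenever a primitive is added
        self.add_primitive()                # start with a single primitive

    def add_primitive(self, freeze_existing=True):
        """Allocate new capacity for harder clips or new tasks (e.g. fail-state recovery)."""
        if freeze_existing:
            for p in self.primitives.parameters():
                p.requires_grad_(False)     # keep earlier skills intact
        self.primitives.append(mlp(self.obs_dim, 2 * self.act_dim))
        self.gate = mlp(self.obs_dim, len(self.primitives))  # new gating head over K primitives

    def forward(self, obs):
        w = torch.softmax(self.gate(obs), dim=-1)             # (B, K) gate weights
        mus, sigmas = [], []
        for prim in self.primitives:
            mu, log_sigma = prim(obs).chunk(2, dim=-1)
            mus.append(mu)
            sigmas.append(log_sigma.clamp(-5, 2).exp())
        mu = torch.stack(mus, dim=1)                           # (B, K, A)
        sigma = torch.stack(sigmas, dim=1)                     # (B, K, A)
        # Multiplicative composition: gate-weighted product of Gaussians,
        # i.e. precision-weighted mean with weights w_k / sigma_k^2.
        prec = w.unsqueeze(-1) / sigma.pow(2)                  # (B, K, A)
        fused_var = 1.0 / prec.sum(dim=1)
        fused_mu = fused_var * (prec * mu).sum(dim=1)
        return fused_mu, fused_var.sqrt()


if __name__ == "__main__":
    # Dimensions here are placeholders, not the paper's humanoid setup.
    policy = ProgressiveMultiplicativePolicy(obs_dim=312, act_dim=69)
    obs = torch.randn(4, 312)
    mu, sigma = policy(obs)      # fused action distribution
    policy.add_primitive()       # grow capacity for a harder subset of clips
    mu, sigma = policy(obs)
```

In this sketch, only the gating head and the newly added primitive remain trainable after `add_primitive`, which is one way to realize the abstract's claim of adding tasks without catastrophic forgetting.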

Related Material


[bibtex]
@InProceedings{Luo_2023_ICCV,
    author    = {Luo, Zhengyi and Cao, Jinkun and Winkler, Alexander and Kitani, Kris and Xu, Weipeng},
    title     = {Perpetual Humanoid Control for Real-time Simulated Avatars},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {10895-10904}
}