Breaking The Limits of Text-conditioned 3D Motion Synthesis with Elaborative Descriptions

Yijun Qian, Jack Urbanek, Alexander G. Hauptmann, Jungdam Won; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 2306-2316

Abstract


Given its wide range of applications, generating 3D human motions from textual descriptions has received increasing attention. Unlike the majority of previous works, which treat actions as single entities and can only generate short sequences for simple motions, we propose EMS, an elaborative motion synthesis model conditioned on detailed natural language descriptions. EMS generates natural and smooth motion sequences for long and complicated actions by factorizing them into groups of atomic actions. It also understands atomic-action-level attributes (e.g., motion direction, speed, and body parts), enabling users to generate sequences of unseen complex actions from unique sequences of known atomic actions, each with independent attribute settings and timings. We evaluate our method on the KIT Motion-Language and BABEL benchmarks, where it outperforms all previous state-of-the-art methods by noticeable margins.

Related Material


[pdf]
[bibtex]
@InProceedings{Qian_2023_ICCV,
  author    = {Qian, Yijun and Urbanek, Jack and Hauptmann, Alexander G. and Won, Jungdam},
  title     = {Breaking The Limits of Text-conditioned 3D Motion Synthesis with Elaborative Descriptions},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023},
  pages     = {2306-2316}
}