Action-Conditioned 3D Human Motion Synthesis With Transformer VAE

Mathis Petrovich, Michael J. Black, Gül Varol; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 10985-10995

Abstract

We tackle the problem of action-conditioned generation of realistic and diverse human motion sequences. In contrast to methods that complete or extend motion sequences, this task does not require an initial pose or sequence. Here we learn an action-aware latent representation for human motions by training a generative variational autoencoder (VAE). By sampling from this latent space and querying a certain duration through a series of positional encodings, we synthesize variable-length motion sequences conditioned on a categorical action. Specifically, we design a Transformer-based architecture, ACTOR, for encoding and decoding a sequence of parametric SMPL human body models estimated from action recognition datasets. We evaluate our approach on the NTU RGB+D, HumanAct12 and UESTC datasets and show improvements over the state of the art. Furthermore, we present two use cases: improving action recognition by augmenting the training data with our synthesized motions, and motion denoising. Code and models are available on our project page.
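
The sketch below illustrates the decoding path the abstract describes: sample a latent vector, condition it on a categorical action, and query a Transformer decoder with positional encodings for the desired number of frames, which is what makes the output length a free choice at generation time. This is a minimal, assumption-laden PyTorch sketch, not the authors' implementation: the module names (SinusoidalPE, ActionConditionedDecoder), the learned per-action bias, the layer sizes, and the pose dimensionality (24 SMPL joints in a 6D rotation representation plus root translation) are all illustrative choices; the released code on the project page is the authoritative version.

import math
import torch
import torch.nn as nn

class SinusoidalPE(nn.Module):
    """Standard sinusoidal positional encodings, queried for T timesteps."""
    def __init__(self, d_model: int, max_len: int = 1000):
        super().__init__()
        pos = torch.arange(max_len).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, length: int) -> torch.Tensor:
        return self.pe[:length]  # (T, d_model)

class ActionConditionedDecoder(nn.Module):
    """Illustrative decoder: latent z + action label -> T-frame pose sequence."""
    def __init__(self, num_actions: int, latent_dim: int = 256,
                 pose_dim: int = 24 * 6 + 3):  # assumed: 6D rotations + translation
        super().__init__()
        self.action_bias = nn.Parameter(torch.randn(num_actions, latent_dim))
        layer = nn.TransformerDecoderLayer(d_model=latent_dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.pe = SinusoidalPE(latent_dim)
        self.to_pose = nn.Linear(latent_dim, pose_dim)

    def forward(self, z: torch.Tensor, action: torch.Tensor, length: int) -> torch.Tensor:
        # Shift the latent by a learned per-action bias and use the result as
        # the single "memory" token the Transformer decoder attends to.
        memory = (z + self.action_bias[action]).unsqueeze(1)                  # (B, 1, D)
        # Queries are just positional encodings for the requested duration.
        queries = self.pe(length).unsqueeze(0).expand(z.size(0), -1, -1)      # (B, T, D)
        frames = self.decoder(tgt=queries, memory=memory)                     # (B, T, D)
        return self.to_pose(frames)                                           # (B, T, pose_dim)

# Generate a 60-frame motion for action class 3 by sampling the Gaussian prior.
decoder = ActionConditionedDecoder(num_actions=12)
z = torch.randn(1, 256)                     # z ~ N(0, I)
motion = decoder(z, torch.tensor([3]), 60)  # (1, 60, pose_dim) SMPL-style parameters
print(motion.shape)

Because the decoder queries are nothing but positional encodings, changing the requested length changes only the number of queries, so no initial pose or seed sequence is needed.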

Related Material

[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Petrovich_2021_ICCV,
    author    = {Petrovich, Mathis and Black, Michael J. and Varol, G\"ul},
    title     = {Action-Conditioned 3D Human Motion Synthesis With Transformer VAE},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {10985-10995}
}