MMM: Generative Masked Motion Model
Abstract
Recent advances in text-to-motion generation using diffusion and autoregressive models have shown promising results. However, these models often suffer from a trade-off between real-time performance, high fidelity, and motion editability. To address this gap, we introduce MMM, a novel yet simple motion generation paradigm based on the Masked Motion Model. MMM consists of two key components: (1) a motion tokenizer that transforms 3D human motion into a sequence of discrete tokens in latent space, and (2) a conditional masked motion transformer that learns to predict randomly masked motion tokens conditioned on the pre-computed text tokens. By attending to motion and text tokens in all directions, MMM explicitly captures the inherent dependency among motion tokens and the semantic mapping between motion and text tokens. During inference, this allows parallel and iterative decoding of multiple motion tokens that are highly consistent with fine-grained text descriptions, thereby simultaneously achieving high-fidelity and high-speed motion generation. In addition, MMM has innate motion editability: by simply placing mask tokens where editing is needed, MMM automatically fills the gaps while guaranteeing smooth transitions between the edited and unedited parts. Extensive experiments on the HumanML3D and KIT-ML datasets demonstrate that MMM surpasses current leading methods in generating high-quality motion (evidenced by superior FID scores of 0.08 and 0.429) while offering advanced editing features such as body-part modification, motion in-betweening, and the synthesis of long motion sequences. Moreover, MMM is two orders of magnitude faster on a single mid-range GPU than editable motion diffusion models. Our project page is available at https://exitudio.github.io/MMM-page/.
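The parallel, iterative decoding the abstract describes follows the general masked-modeling recipe popularized by MaskGIT-style generators: start from an all-masked token sequence, predict every masked position at once, commit the most confident predictions, and re-mask the rest for the next round. Below is a minimal PyTorch sketch of that idea under stated assumptions; the transformer call signature, MASK_ID, CODEBOOK_SIZE, the cosine masking schedule, and all other names are illustrative placeholders, not the paper's actual implementation.

import math
import torch

MASK_ID = 512        # assumed id of the special [MASK] token (hypothetical)
CODEBOOK_SIZE = 512  # assumed motion codebook size (hypothetical)

@torch.no_grad()
def generate_motion_tokens(transformer, text_emb, seq_len, num_steps=10):
    """Decode a motion token sequence by iterative parallel unmasking.

    `transformer(tokens, text_emb)` is assumed to return per-position
    logits of shape (1, seq_len, CODEBOOK_SIZE).
    """
    tokens = torch.full((1, seq_len), MASK_ID, dtype=torch.long)
    for step in range(1, num_steps + 1):
        logits = transformer(tokens, text_emb)
        conf, pred = logits.softmax(-1).max(-1)          # (1, seq_len) each
        still_masked = tokens == MASK_ID
        tokens = torch.where(still_masked, pred, tokens)  # commit all predictions
        # cosine schedule (as in MaskGIT): fraction left masked shrinks to 0
        n_remask = int(seq_len * math.cos(math.pi / 2 * step / num_steps))
        if n_remask > 0:
            # protect already-fixed tokens, then re-mask the least-confident
            # fresh predictions so they are retried in the next round
            conf = conf.masked_fill(~still_masked, float("inf"))
            idx = conf[0].topk(n_remask, largest=False).indices
            tokens[0, idx] = MASK_ID
    return tokens

The editing features the abstract mentions would reuse the same loop: instead of starting from an all-masked sequence, only the positions to be edited (e.g., a middle span for in-betweening, or tokens tied to one body part) are set to MASK_ID while the rest stay fixed, so the bidirectional model fills the gap consistently with the surrounding unmasked context.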
Related Material
[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Pinyoanuntapong_2024_CVPR,
    author    = {Pinyoanuntapong, Ekkasit and Wang, Pu and Lee, Minwoo and Chen, Chen},
    title     = {MMM: Generative Masked Motion Model},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {1546-1555}
}