Make-An-Animation: Large-Scale Text-conditional 3D Human Motion Generation

Samaneh Azadi, Akbar Shah, Thomas Hayes, Devi Parikh, Sonal Gupta; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 15039-15048


Text-guided human motion generation has drawn significant interest because of its impactful applications spanning animation and robotics. Recently, the application of diffusion models to motion generation has improved the quality of generated motions. However, existing approaches are limited by their reliance on relatively small-scale motion capture data, leading to poor performance on more diverse, in-the-wild prompts. In this paper, we introduce Make-An-Animation, a text-conditioned human motion generation model that learns more diverse poses and prompts from large-scale image-text datasets, enabling a significant improvement in performance over prior work. Make-An-Animation is trained in two stages. First, we train on a curated large-scale dataset of (text, static pseudo-pose) pairs extracted from image-text datasets. Second, we fine-tune on motion capture data, adding extra layers to model the temporal dimension. Unlike prior diffusion models for motion generation, Make-An-Animation uses a U-Net architecture similar to recent text-to-video generation models. Human evaluation of motion realism and alignment with the input text shows that our model reaches state-of-the-art performance on text-to-motion generation.
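
To make the two-stage recipe concrete, below is a minimal PyTorch-style sketch (not the authors' code) of the core idea: a per-frame pose denoiser is pretrained on single-frame (text, pseudo-pose) pairs, then temporal layers, initialized as identity mappings, are inserted so the pretrained per-frame behavior is preserved at the start of motion fine-tuning. All module names, dimensions, and the zero-initialized residual gate are illustrative assumptions, and a simple MLP stack stands in for the paper's U-Net.

import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    # Self-attention over the time axis, added for stage-2 fine-tuning.
    # The zero-initialized gate makes the layer an identity map at first,
    # so stage-1 per-frame predictions are unchanged before fine-tuning.
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.gate = nn.Parameter(torch.zeros(1))  # residual gate, zero at init

    def forward(self, x):                 # x: (batch, time, dim)
        h = self.norm(x)
        h, _ = self.attn(h, h, h)         # mixes information across frames
        return x + self.gate * h          # identity at initialization

class PoseDenoiser(nn.Module):
    # Per-frame denoiser pretrained in stage 1 on static pseudo-poses
    # (sequences of length T == 1). Pose/text dims are placeholders.
    def __init__(self, pose_dim=72, hidden=256, text_dim=512, depth=4):
        super().__init__()
        self.in_proj = nn.Linear(pose_dim + text_dim + 1, hidden)
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden, hidden), nn.SiLU())
             for _ in range(depth)])
        # Temporal layers inserted for stage 2; no-ops on single frames.
        self.temporal = nn.ModuleList(
            [TemporalAttention(hidden) for _ in range(depth)])
        self.out_proj = nn.Linear(hidden, pose_dim)

    def forward(self, noisy_pose, t, text_emb):
        # noisy_pose: (B, T, pose_dim), t: (B,), text_emb: (B, text_dim)
        B, T, _ = noisy_pose.shape
        cond = torch.cat([noisy_pose,
                          text_emb[:, None].expand(B, T, -1),
                          t[:, None, None].expand(B, T, 1)], dim=-1)
        h = self.in_proj(cond)
        for block, temporal in zip(self.blocks, self.temporal):
            h = block(h)                  # per-frame processing
            h = temporal(h)               # cross-frame mixing (stage 2)
        return self.out_proj(h)           # predicted noise on the sequence

model = PoseDenoiser()
x = torch.randn(2, 16, 72)               # a batch of 16-frame motions
eps = model(x, torch.rand(2), torch.randn(2, 512))
print(eps.shape)                          # torch.Size([2, 16, 72])

Gating the new temporal layers to zero at initialization is a common trick in text-to-video models that extend pretrained image backbones; it lets stage 2 train only the motion dynamics without disturbing what stage 1 learned about diverse poses.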

Related Material

@InProceedings{Azadi_2023_ICCV,
    author    = {Azadi, Samaneh and Shah, Akbar and Hayes, Thomas and Parikh, Devi and Gupta, Sonal},
    title     = {Make-An-Animation: Large-Scale Text-conditional 3D Human Motion Generation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {15039-15048}
}