MAMMOS: MApping Multiple Human MOtion with Scene Understanding and Natural Interactions

Donggeun Lim, Cheongi Jeong, Young Min Kim; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 4278-4287

Abstract


We present MAMMOS, an automated framework that generates the motions of multiple humans who naturally interact with each other in a given 3D scene. Many practical VR scenarios require creating dynamic human characters in harmony with the surrounding environment and other people. However, it is hard for an artist to manually author multiple character motions tailored to a given 3D scene structure, or to gather sufficient data to train an automated system that jointly considers these entangled requirements. MAMMOS is a hierarchical framework that handles spatio-temporal constraints and generates high-quality motions. Given a simple tuple of action labels describing the desired motion sequence, MAMMOS first places spatio-temporal anchors for each character so that collisions are avoided while the necessary interactions remain possible. We then generate a timeline of collision-free paths for each individual within the scene and connect them to produce diverse and natural motions. To the best of our knowledge, we are the first to generate long-horizon motion sequences of multiple humans with realistic interactions, enabling 3D scenes to be populated automatically.
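The two-stage idea described in the abstract (spatio-temporal anchor placement, then per-character path generation) can be pictured roughly with the minimal Python sketch below. All names (Anchor, place_anchors, plan_path) and the toy spacing/interpolation heuristics are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the hierarchical pipeline: (1) place spatio-temporal
# anchors per character, (2) connect consecutive anchors with paths.
# Names and the toy planning logic are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Anchor:
    character: int
    action: str          # action label from the input tuple, e.g. "sit"
    position: tuple      # (x, y) location in the scene
    time: float          # scheduled time of the action

def place_anchors(action_labels, scene_locations, spacing=1.0):
    """Assign each character's actions to scene locations and times,
    offsetting characters so their anchors do not coincide (toy heuristic)."""
    anchors = []
    for char_id, labels in enumerate(action_labels):
        for t, action in enumerate(labels):
            x, y = scene_locations[action]
            anchors.append(Anchor(char_id, action, (x + char_id * spacing, y), float(t)))
    return anchors

def plan_path(start, goal, steps=5):
    """Linearly interpolate between two anchor positions; stands in for the
    scene-aware, collision-free path planner."""
    return [tuple(s + (g - s) * i / steps for s, g in zip(start, goal))
            for i in range(steps + 1)]

if __name__ == "__main__":
    # Two characters, each given a short tuple of action labels.
    actions = [("walk", "sit"), ("walk", "handshake")]
    locations = {"walk": (0.0, 0.0), "sit": (2.0, 1.0), "handshake": (2.0, 1.5)}
    anchors = place_anchors(actions, locations)
    per_char = {}
    for a in anchors:
        per_char.setdefault(a.character, []).append(a)
    for char_id, seq in per_char.items():
        for a, b in zip(seq, seq[1:]):
            print(char_id, a.action, "->", b.action, plan_path(a.position, b.position))
```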

Related Material


@InProceedings{Lim_2023_ICCV,
    author    = {Lim, Donggeun and Jeong, Cheongi and Kim, Young Min},
    title     = {MAMMOS: MApping Multiple Human MOtion with Scene Understanding and Natural Interactions},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2023},
    pages     = {4278-4287}
}