Multimodal Sense-Informed Forecasting of 3D Human Motions
Abstract
Predicting future human pose is a fundamental capability for machine intelligence: it lets robots plan their behavior and paths ahead of time so as to seamlessly accomplish human-robot collaboration in real-world 3D scenarios. Despite encouraging results, existing approaches rarely consider the effects of the external scene on the motion sequence, leading to pronounced artifacts and physical implausibilities in the predictions. To address this limitation, this work introduces a novel multimodal sense-informed motion prediction approach that conditions high-fidelity generation on two modalities of information, the external 3D scene and the internal human gaze, and is able to recognize their salience for future human activity. Furthermore, the gaze information is regarded as the human intention; combining it with both motion and scene features, we construct a ternary intention-aware attention that supervises the generation to match where the human intends to reach. Meanwhile, we introduce a semantic coherence-aware attention to explicitly distinguish the salient point clouds from the underlying ones, ensuring a reasonable interaction of the generated sequence with the 3D scene. On two real-world benchmarks, the proposed method achieves state-of-the-art performance in both 3D human pose and trajectory prediction. More detailed results are available on the project page: https://sites.google.com/view/cvpr2024sif3d.
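The abstract describes the ternary intention-aware attention only at a high level. As a rough illustration of what a ternary attention over motion, scene, and gaze features might look like, the PyTorch sketch below treats gaze tokens as intention queries and attends them over motion-history and scene point-cloud tokens with standard multi-head cross-attention. All names, shapes, and the fusion order are assumptions made for illustration, not the authors' actual SIF3D implementation, and the semantic coherence-aware attention is not covered.

import torch
import torch.nn as nn

class TernaryIntentionAttention(nn.Module):
    """Illustrative sketch only (not the paper's code): fuse motion,
    scene, and gaze features via two cross-attention stages."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        # Gaze serves as the intention query; motion and scene provide context.
        self.motion_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.scene_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, motion, scene, gaze):
        # motion: (B, T, D) past-pose tokens; scene: (B, N, D) point-cloud
        # tokens; gaze: (B, G, D) gaze tokens used as intention queries.
        m, _ = self.motion_attn(gaze, motion, motion)  # intention vs. motion history
        s, _ = self.scene_attn(gaze, scene, scene)     # intention vs. 3D scene
        return self.fuse(torch.cat([m, s], dim=-1))    # intention-aware feature

# Toy shapes only; real token counts and dimensions depend on the model.
B, T, N, G, D = 2, 50, 1024, 10, 128
out = TernaryIntentionAttention(D)(
    torch.randn(B, T, D), torch.randn(B, N, D), torch.randn(B, G, D)
)
print(out.shape)  # torch.Size([2, 10, 128])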
Related Material
[pdf]
[arXiv]
[bibtex]@InProceedings{Lou_2024_CVPR, author = {Lou, Zhenyu and Cui, Qiongjie and Wang, Haofan and Tang, Xu and Zhou, Hong}, title = {Multimodal Sense-Informed Forecasting of 3D Human Motions}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2024}, pages = {2144-2154} }