LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free Environment

Yiming Ren, Xiao Han, Chengfeng Zhao, Jingya Wang, Lan Xu, Jingyi Yu, Yuexin Ma; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 1281-1291

Abstract


For human-centric large-scale scenes, fine-grained modeling of 3D human global pose and shape is significant for scene understanding and can benefit many real-world applications. In this paper, we present LiveHPS, a novel single-LiDAR-based approach for scene-level human pose and shape estimation without any limitation of light conditions or wearable devices. In particular, we design a distillation mechanism to mitigate the distribution-varying effect of LiDAR point clouds and exploit the temporal-spatial geometric and dynamic information present in consecutive frames to handle occlusion and noise disturbance. LiveHPS, with its efficient configuration and high-quality output, is well-suited for real-world applications. Moreover, we propose a large human motion dataset, named FreeMotion, which is collected in various scenarios with diverse human poses, shapes, and translations. It consists of multi-modal and multi-view acquisition data captured by calibrated and synchronized LiDARs, cameras, and IMUs. Extensive experiments on our new dataset and other public datasets demonstrate the state-of-the-art performance and robustness of our approach. We will release our code and dataset soon.
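The abstract describes the pipeline only at a high level. As a rough illustration of the input/output structure of single-LiDAR, sequence-based human pose and shape estimation (a minimal sketch, not the authors' actual LiveHPS architecture), the following PyTorch example pools each LiDAR frame into a feature with a PointNet-style per-point MLP, runs a GRU over consecutive frames, and regresses SMPL-style pose (24 joints, axis-angle), shape (10 betas), and global translation per frame. All module names and layer sizes are illustrative assumptions.

# Minimal sketch (illustrative only): per-frame point-cloud encoding + temporal
# aggregation + per-frame regression of pose, shape, and global translation.
import torch
import torch.nn as nn

class LiDARPoseSketch(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Shared per-point MLP followed by max-pooling (PointNet-style).
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )
        # Temporal module over consecutive frames.
        self.temporal = nn.GRU(feat_dim, feat_dim, batch_first=True)
        # Heads: 24*3 axis-angle pose, 10 shape betas, 3D global translation.
        self.pose_head = nn.Linear(feat_dim, 24 * 3)
        self.shape_head = nn.Linear(feat_dim, 10)
        self.transl_head = nn.Linear(feat_dim, 3)

    def forward(self, points):
        # points: (B, T, N, 3) -- B sequences of T frames with N LiDAR points each.
        per_point = self.point_mlp(points)          # (B, T, N, feat_dim)
        frame_feat = per_point.max(dim=2).values    # (B, T, feat_dim)
        temporal_feat, _ = self.temporal(frame_feat)
        return {
            "pose": self.pose_head(temporal_feat),      # (B, T, 72)
            "betas": self.shape_head(temporal_feat),    # (B, T, 10)
            "transl": self.transl_head(temporal_feat),  # (B, T, 3)
        }

# Usage: 2 sequences of 16 frames, 512 LiDAR points per frame.
model = LiDARPoseSketch()
out = model(torch.randn(2, 16, 512, 3))
print(out["pose"].shape, out["betas"].shape, out["transl"].shape)

The predicted pose, shape, and translation parameters could then be passed to an SMPL body model to recover a scene-level mesh; the paper's distillation mechanism and its specific temporal-spatial modules are not reproduced here.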

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Ren_2024_CVPR,
    author    = {Ren, Yiming and Han, Xiao and Zhao, Chengfeng and Wang, Jingya and Xu, Lan and Yu, Jingyi and Ma, Yuexin},
    title     = {LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free Environment},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {1281-1291}
}