3D Human Pose Perception from Egocentric Stereo Videos

Hiroyasu Akada, Jian Wang, Vladislav Golyanik, Christian Theobalt; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 767-776

Abstract


While head-mounted devices are becoming more compact, they provide egocentric views with significant self-occlusions of the device user. Hence, existing methods often fail to accurately estimate complex 3D poses from egocentric views. In this work, we propose a new transformer-based framework to improve egocentric stereo 3D human pose estimation, which leverages the scene information and temporal context of egocentric stereo videos. Specifically, we utilize 1) depth features from our 3D scene reconstruction module with uniformly sampled windows of egocentric stereo frames and 2) human joint queries enhanced by temporal features of the video inputs. Our method is able to accurately estimate human poses even in challenging scenarios, such as crouching and sitting. Furthermore, we introduce two new benchmark datasets, i.e., UnrealEgo2 and UnrealEgo-RW (RealWorld). UnrealEgo2 is a large-scale in-the-wild dataset captured in synthetic 3D scenes. UnrealEgo-RW is a real-world dataset captured with our newly developed device. The proposed datasets offer a much larger number of egocentric stereo views with a wider variety of human motions than the existing datasets, allowing comprehensive evaluation of existing and upcoming methods. Our extensive experiments show that the proposed approach significantly outperforms previous methods. UnrealEgo2, UnrealEgo-RW, and trained models are available on our project page and Benchmark Challenge.
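To make the two ingredients named in the abstract concrete, below is a minimal PyTorch sketch of 1) uniformly sampling a window of egocentric stereo frames and 2) decoding learnable joint queries against temporal features with a transformer. All module names, shapes, and hyperparameters are illustrative assumptions for exposition, not the authors' implementation (which also includes a 3D scene reconstruction module providing depth features).

```python
# Minimal sketch: uniformly sampled stereo windows + joint queries decoded
# against temporal features. Shapes and modules are assumptions, not the
# paper's architecture.
import torch
import torch.nn as nn


def sample_uniform_window(video: torch.Tensor, num_frames: int) -> torch.Tensor:
    """Uniformly sample `num_frames` stereo frames from a clip.

    video: (T, 2, C, H, W) -- T time steps, 2 stereo views.
    """
    T = video.shape[0]
    idx = torch.linspace(0, T - 1, num_frames).round().long()
    return video[idx]  # (num_frames, 2, C, H, W)


class EgoStereoPoseSketch(nn.Module):
    def __init__(self, num_joints: int = 16, dim: int = 256):
        super().__init__()
        # Placeholder per-frame stereo encoder (stand-in for depth features
        # from a scene reconstruction module).
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(2 * 3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Temporal encoder over the sampled window of frame features.
        self.temporal_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Learnable per-joint queries, refined by temporal context.
        self.joint_queries = nn.Parameter(torch.randn(num_joints, dim))
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.head = nn.Linear(dim, 3)  # 3D coordinates per joint

    def forward(self, window: torch.Tensor) -> torch.Tensor:
        # window: (N, 2, C, H, W) -- a uniformly sampled stereo window.
        x = window.flatten(1, 2)                           # (N, 2*C, H, W)
        feats = self.frame_encoder(x).flatten(1)           # (N, dim)
        feats = self.temporal_encoder(feats.unsqueeze(0))  # (1, N, dim)
        queries = self.joint_queries.unsqueeze(0)          # (1, J, dim)
        joints = self.decoder(queries, feats)              # (1, J, dim)
        return self.head(joints).squeeze(0)                # (J, 3) pose


if __name__ == "__main__":
    clip = torch.randn(30, 2, 3, 128, 128)   # 30-frame stereo clip
    window = sample_uniform_window(clip, num_frames=5)
    pose = EgoStereoPoseSketch()(window)
    print(pose.shape)                        # torch.Size([16, 3])
```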

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Akada_2024_CVPR,
    author    = {Akada, Hiroyasu and Wang, Jian and Golyanik, Vladislav and Theobalt, Christian},
    title     = {3D Human Pose Perception from Egocentric Stereo Videos},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {767-776}
}