Egocentric Pose Estimation From Human Vision Span

Hao Jiang, Vamsi Krishna Ithapu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 11006-11014

Abstract

Estimating the camera wearer's body pose from an egocentric view (egopose) is a vital task in augmented and virtual reality. Existing approaches either use a narrow field-of-view front-facing camera that barely captures the wearer, or an extended head-mounted top-down camera for maximal wearer visibility. In this paper, we tackle egopose estimation from a more natural human vision span, where the camera wearer can be seen in the peripheral view and, depending on the head pose, may become invisible or only partially visible. This is a realistic visual field for user-centric wearable devices such as glasses, which have front-facing wide-angle cameras. Existing solutions are not appropriate for this setting, so we propose a novel deep learning system that takes advantage of both the dynamic features from camera SLAM and the body shape imagery. We compute the 3D head pose, the 3D body pose, and the figure/ground separation simultaneously, while explicitly enforcing geometric consistency across the pose attributes. We further show that this system can be trained robustly with large amounts of existing mocap data, so we do not have to collect and annotate large new datasets. Lastly, our system estimates egopose on the fly in real time while maintaining high accuracy.
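
The abstract outlines a multi-task architecture rather than a concrete implementation. Below is a minimal, hypothetical PyTorch sketch of the general idea it describes: SLAM-derived motion features and wearable-camera imagery are fused, and three heads jointly predict head pose, body pose, and a figure/ground mask, with a simple geometric-consistency term tying the predicted head pose to the head joint of the predicted body pose. All module names, feature dimensions, and the joint count are illustrative assumptions, not the authors' actual design.

import torch
import torch.nn as nn

NUM_JOINTS = 15  # illustrative joint count, not specified in the abstract

class EgoPoseNet(nn.Module):
    def __init__(self, motion_dim=64, img_feat_dim=256):
        super().__init__()
        # Encoder for dynamic features derived from camera SLAM
        # (e.g., a window of head motion), flattened to a vector.
        self.motion_enc = nn.Sequential(
            nn.Linear(motion_dim, 128), nn.ReLU(), nn.Linear(128, 128))
        # Tiny stand-in for an image backbone over the peripheral view.
        self.img_enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, img_feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        fused = 128 + img_feat_dim
        # Three heads over the shared fused feature.
        self.head_pose = nn.Linear(fused, 6)               # rotation (axis-angle) + translation
        self.body_pose = nn.Linear(fused, NUM_JOINTS * 3)  # 3D joint positions
        self.fg_mask = nn.Linear(fused, 32 * 32)           # coarse figure/ground logits

    def forward(self, motion_feat, image):
        f = torch.cat([self.motion_enc(motion_feat), self.img_enc(image)], dim=1)
        head = self.head_pose(f)                            # (B, 6)
        body = self.body_pose(f).view(-1, NUM_JOINTS, 3)    # (B, J, 3)
        mask = self.fg_mask(f).view(-1, 1, 32, 32)          # (B, 1, 32, 32)
        return head, body, mask

def consistency_loss(head, body, head_joint=0):
    # Geometric consistency: the translation part of the predicted head
    # pose should agree with the head joint of the predicted body pose.
    return torch.mean((head[:, 3:] - body[:, head_joint]) ** 2)

# Usage with dummy inputs:
net = EgoPoseNet()
head, body, mask = net(torch.randn(2, 64), torch.randn(2, 3, 128, 128))
loss = consistency_loss(head, body)

In this reading, the shared fused feature lets the pose and figure/ground heads reinforce one another even when the wearer is only partially visible, and the consistency term is one plausible instance of what the abstract calls explicitly enforcing geometric consistency across pose attributes.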

Related Material

BibTeX
@InProceedings{Jiang_2021_ICCV,
    author    = {Jiang, Hao and Ithapu, Vamsi Krishna},
    title     = {Egocentric Pose Estimation From Human Vision Span},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {11006-11014}
}