Two-Hand Global 3D Pose Estimation Using Monocular RGB

Fanqing Lin, Connor Wilhelm, Tony Martinez; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 2373-2381

Abstract


We tackle the challenging task of estimating global 3D joint locations for both hands using only monocular RGB input images. We propose a novel multi-stage, convolutional-neural-network-based pipeline that accurately segments and locates the hands despite inter-hand occlusion and complex background noise, and then estimates the 2D and 3D canonical joint locations without any depth information. Global joint locations with respect to the camera origin are computed from the estimated hand poses and the actual length of a key bone using a novel projection algorithm. To train the CNNs for this new task, we introduce a large-scale synthetic 3D hand pose dataset. We demonstrate that our system outperforms previous work on 3D canonical hand pose estimation benchmark datasets with RGB-only information. Additionally, we present the first work that achieves accurate global 3D hand tracking on both hands using RGB-only inputs, and we provide extensive quantitative and qualitative evaluation.
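To illustrate the idea of recovering an absolute (camera-space) hand position from a root-relative pose and a known bone length, the sketch below shows a simplified weak-perspective approximation rather than the paper's projection algorithm. The joint indices, the default key-bone length, and the camera intrinsics (fx, fy, cx, cy) are assumptions for illustration only.

    import numpy as np

    def estimate_global_translation(joints_rel, joints_2d, bone=(0, 9),
                                    bone_length_m=0.09, fx=617.0, fy=617.0,
                                    cx=320.0, cy=240.0):
        """Recover an absolute root position from a root-relative 3D pose.

        joints_rel    : (21, 3) root-relative (canonical) 3D joint locations.
        joints_2d     : (21, 2) pixel coordinates of the same joints.
        bone          : indices of the key bone (hypothetical: wrist to middle MCP).
        bone_length_m : assumed real-world length of the key bone in metres.
        fx, fy, cx, cy: assumed pinhole camera intrinsics.
        """
        # Scale the canonical pose to metric units using the known bone length.
        unit_len = np.linalg.norm(joints_rel[bone[0]] - joints_rel[bone[1]])
        joints_metric = joints_rel * (bone_length_m / unit_len)

        # Weak-perspective depth: metric bone length vs. its projected 2D length.
        len_2d = np.linalg.norm(joints_2d[bone[0]] - joints_2d[bone[1]])
        z_root = fx * bone_length_m / max(len_2d, 1e-6)

        # Back-project the root joint's pixel location at that depth.
        u, v = joints_2d[0]
        t = np.array([(u - cx) * z_root / fx, (v - cy) * z_root / fy, z_root])

        # Absolute joint positions in camera coordinates.
        return joints_metric + t

In practice, a least-squares fit of the root translation against all projected joints would be more robust than using a single bone, but the single-bone version keeps the depth-from-known-length intuition explicit.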

Related Material


@InProceedings{Lin_2021_WACV,
    author    = {Lin, Fanqing and Wilhelm, Connor and Martinez, Tony},
    title     = {Two-Hand Global 3D Pose Estimation Using Monocular RGB},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2021},
    pages     = {2373-2381}
}