Aligning Latent Spaces for 3D Hand Pose Estimation

Linlin Yang, Shile Li, Dongheui Lee, Angela Yao; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 2335-2343

Abstract


Hand pose estimation from monocular RGB inputs is a highly challenging task. Many previous works in the monocular setting used only RGB information for training, despite the availability of corresponding data in other modalities such as depth maps. In this work, we propose to learn a joint latent representation that leverages other modalities as weak labels to boost the RGB-based hand pose estimator. By design, our architecture is highly flexible and can embed diverse modalities such as heat maps, depth maps, and point clouds. In particular, we find that encoding and decoding the point cloud of the hand surface improves the quality of the joint latent representation. Experiments show that, with the aid of other modalities during training, our proposed method boosts the accuracy of RGB-based hand pose estimation and significantly outperforms the state of the art on two public benchmarks.
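To make the cross-modal alignment idea concrete, below is a minimal PyTorch sketch of one training step that pulls an RGB latent code toward a depth latent code while both feed a shared pose decoder. The class names (Encoder, PoseDecoder), the MLP architectures, the L2 alignment penalty, the loss weight, and all dimensions are illustrative assumptions, not the paper's actual design; the abstract above does not specify these details.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a flattened modality input to a latent code z (illustrative)."""
    def __init__(self, in_dim, latent_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class PoseDecoder(nn.Module):
    """Regresses num_joints 3D joint positions from a latent code."""
    def __init__(self, latent_dim=64, num_joints=21):
        super().__init__()
        self.num_joints = num_joints
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, num_joints * 3),
        )

    def forward(self, z):
        return self.net(z).view(-1, self.num_joints, 3)

# One encoder per modality; depth is used only at training time.
rgb_enc, depth_enc = Encoder(3 * 64 * 64), Encoder(64 * 64)
decoder = PoseDecoder()
opt = torch.optim.Adam(
    list(rgb_enc.parameters()) + list(depth_enc.parameters())
    + list(decoder.parameters()),
    lr=1e-4,
)

rgb = torch.randn(8, 3 * 64 * 64)   # placeholder flattened RGB batch
depth = torch.randn(8, 64 * 64)     # placeholder flattened depth batch
gt_pose = torch.randn(8, 21, 3)     # placeholder 3D joint labels

# Both latent codes must explain the same pose...
z_rgb, z_depth = rgb_enc(rgb), depth_enc(depth)
pose_loss = (nn.functional.mse_loss(decoder(z_rgb), gt_pose)
             + nn.functional.mse_loss(decoder(z_depth), gt_pose))
# ...and an L2 penalty (an assumed choice) pulls the two latents together.
align_loss = nn.functional.mse_loss(z_rgb, z_depth)
loss = pose_loss + 0.1 * align_loss  # weight 0.1 is arbitrary here

opt.zero_grad()
loss.backward()
opt.step()

At test time the depth branch is discarded and prediction runs as decoder(rgb_enc(rgb)), so the extra modality acts purely as train-time supervision for the RGB latent space.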

Related Material


BibTeX:
@InProceedings{Yang_2019_ICCV,
author = {Yang, Linlin and Li, Shile and Lee, Dongheui and Yao, Angela},
title = {Aligning Latent Spaces for 3D Hand Pose Estimation},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}