Self-Supervised Transfer Learning for Hand Mesh Recovery From Binocular Images

Zheng Chen, Sihan Wang, Yi Sun, Xiaohong Ma; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 11626-11634

Abstract


Traditional methods for RGB hand mesh recovery usually train a separate model for each dataset with the corresponding ground truth and are difficult to adapt to new scenarios where no ground truth is available for supervision. To address this problem, we propose a self-supervised framework for hand mesh estimation in which hand priors are pre-learned from existing hand datasets and then transferred to new scenarios without any landmark annotations. The proposed approach takes binocular images as input and relies mainly on left-right consistency constraints, including appearance consensus and shape consistency, to train the model to estimate the hand mesh in new scenarios. Experiments on the widely used stereo hand dataset show that our model achieves performance comparable to state-of-the-art methods even without the corresponding landmark annotations. To further evaluate our model, we collect a large real binocular dataset, on which the results also qualitatively verify the effectiveness of our approach.
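Below is a minimal sketch of the two self-supervised signals named above, written in PyTorch under our own assumptions (the paper's actual losses, mesh regressor, rendering, and warping are not specified in this abstract). appearance_consensus_loss warps the right view into the left view and penalizes photometric disagreement; shape_consistency_loss asks the meshes predicted independently from each view to coincide in a common camera frame. All function and variable names here are hypothetical.

import torch
import torch.nn.functional as F

def appearance_consensus_loss(left_img, right_img, disparity):
    # Photometric left-right consistency (hypothetical form): for each left-view
    # pixel x, sample the right view at x - d, where d is the per-pixel disparity
    # implied by the predicted hand geometry, then compare appearance.
    b, _, h, w = left_img.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, h, device=left_img.device),
        torch.linspace(-1.0, 1.0, w, device=left_img.device),
        indexing="ij",
    )
    # Convert pixel disparity to the normalized [-1, 1] grid coordinates that
    # grid_sample expects (align_corners=True maps -1..1 to pixel centers).
    xs = xs.unsqueeze(0) - 2.0 * disparity.squeeze(1) / (w - 1)
    ys = ys.unsqueeze(0).expand_as(xs)
    grid = torch.stack((xs, ys), dim=-1)                   # (B, H, W, 2)
    right_warped = F.grid_sample(right_img, grid, align_corners=True)
    return F.l1_loss(right_warped, left_img)

def shape_consistency_loss(verts_left, verts_right, T_right_to_left):
    # Meshes estimated independently from the two views should agree once the
    # right-view vertices (B, V, 3) are moved into the left camera frame by the
    # known stereo extrinsics T_right_to_left (B, 4, 4).
    verts_right_h = F.pad(verts_right, (0, 1), value=1.0)  # homogeneous coords
    verts_in_left = verts_right_h @ T_right_to_left.transpose(1, 2)
    return F.mse_loss(verts_in_left[..., :3], verts_left)

In a transfer setting like the one described, such unsupervised terms would be combined with the priors pre-learned on annotated datasets; the relative weighting is a design choice the abstract does not fix.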

Related Material


@InProceedings{Chen_2021_ICCV,
    author    = {Chen, Zheng and Wang, Sihan and Sun, Yi and Ma, Xiaohong},
    title     = {Self-Supervised Transfer Learning for Hand Mesh Recovery From Binocular Images},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {11626-11634}
}