Learning an Efficient Model of Hand Shape Variation From Depth Images

Sameh Khamis, Jonathan Taylor, Jamie Shotton, Cem Keskin, Shahram Izadi, Andrew Fitzgibbon; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 2540-2548

Abstract


We describe how to learn a compact and efficient model of the surface deformation of human hands. The model is built from a set of noisy depth images of a diverse set of subjects performing different poses with their hands. We represent the observed surface using Loop subdivision of a control mesh that is deformed by our learned parametric shape and pose model. The model simultaneously accounts for variation in subject-specific shape and subject-agnostic pose. Specifically, hand shape is parameterized as a linear combination of a mean mesh in a neutral pose with a small number of offset vectors. This mesh is then articulated using standard linear blend skinning (LBS) to generate the control mesh of a subdivision surface. We define an energy that encourages each depth pixel to be explained by our model, and the use of a smooth subdivision surface allows us to optimize for all parameters jointly from a rough initialization. The efficacy of our method is demonstrated using both synthetic and real data, where it is shown that hand shape variation can be represented using only a small number of basis directions. We compare with other approaches including PCA and show a substantial improvement in the representation power of our model, while maintaining the efficiency of a linear shape basis.
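To make the shape-plus-pose parameterization in the abstract concrete, here is a minimal NumPy sketch of the forward deformation: a mean control mesh plus a linear combination of offset vectors, followed by standard linear blend skinning (LBS). All names and shapes (mean_mesh, shape_basis, skinning_weights, bone_transforms) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def deform_control_mesh(mean_mesh, shape_basis, shape_coeffs,
                        skinning_weights, bone_transforms):
    """Sketch of a linear shape model composed with linear blend skinning.

    mean_mesh:        (V, 3) mean control-mesh vertices in a neutral pose
    shape_basis:      (K, V, 3) small number of per-vertex offset vectors
    shape_coeffs:     (K,) subject-specific shape coefficients
    skinning_weights: (V, B) per-vertex bone weights (rows sum to 1)
    bone_transforms:  (B, 4, 4) rigid bone transforms derived from the pose
    """
    # Subject-specific shape: mean mesh plus a linear combination of offsets.
    shaped = mean_mesh + np.einsum('k,kvd->vd', shape_coeffs, shape_basis)

    # Homogeneous coordinates for applying the 4x4 skinning transforms.
    shaped_h = np.concatenate([shaped, np.ones((shaped.shape[0], 1))], axis=1)  # (V, 4)

    # Linear blend skinning: blend bone transforms per vertex, then transform.
    blended = np.einsum('vb,bij->vij', skinning_weights, bone_transforms)  # (V, 4, 4)
    posed_h = np.einsum('vij,vj->vi', blended, shaped_h)                   # (V, 4)
    return posed_h[:, :3]
```

In the paper, these deformed control vertices define a Loop subdivision surface, and the shape and pose parameters are optimized jointly against the depth data; the sketch above covers only the forward deformation, not the energy minimization.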

Related Material


[bibtex]
@InProceedings{Khamis_2015_CVPR,
author = {Khamis, Sameh and Taylor, Jonathan and Shotton, Jamie and Keskin, Cem and Izadi, Shahram and Fitzgibbon, Andrew},
title = {Learning an Efficient Model of Hand Shape Variation From Depth Images},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2015}
}