Three Dimensional Motion Trail Model for Gesture Recognition

Bin Liang, Lihong Zheng; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2013, pp. 684-691

Abstract


In this paper, an effective method is presented to recognize human gestures from sequences of depth images. Specifically, we propose a three-dimensional motion trail model (3D-MTM) to explicitly represent the dynamics and statics of gestures in 3D space. In 2D space, the motion trail model (2D-MTM) consists of both motion information and static posture information over the gesture sequence along the xoy-plane. Since gestures are performed in 3D space, depth images are also projected onto two other planes to encode additional gesture information. The 2D-MTM is then combined with the complementary motion information from these two additional planes to generate the 3D-MTM. Furthermore, the Histogram of Oriented Gradient (HOG) feature vector is extracted from the proposed 3D-MTM as the representation of a gesture sequence. The experimental results show that the proposed method achieves better results on two publicly available datasets, namely the MSR Action3D dataset and the ChaLearn gesture dataset.
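The pipeline described in the abstract (project each depth frame onto three planes, accumulate a motion trail per plane, then describe it with HOG) can be sketched roughly as below. This is a minimal illustrative approximation, not the authors' implementation: the depth quantization, the trail accumulation rule, and the unnormalized per-cell HOG are all simplifying assumptions made for the sketch.

```python
import numpy as np

def project_depth(depth, d_max=256):
    """Project a depth frame (H x W, 0 = background) onto three planes.
    The front (xoy) view is the depth map itself; the side (yoz) and
    top (xoz) views are occupancy maps obtained by collapsing one
    spatial axis against the (assumed) quantized depth axis."""
    h, w = depth.shape
    front = depth.astype(float)
    side = np.zeros((h, d_max))   # rows (y) vs depth (z)
    top = np.zeros((d_max, w))    # depth (z) vs columns (x)
    ys, xs = np.nonzero(depth)
    zs = depth[ys, xs]
    side[ys, zs] = 1.0
    top[zs, xs] = 1.0
    return front, side, top

def motion_trail(frames):
    """Accumulate inter-frame motion energy plus the final static
    posture for each projection -- a rough stand-in for the 3D-MTM."""
    projs = [project_depth(f) for f in frames]
    trails = []
    for view in range(3):
        seq = [p[view] for p in projs]
        # Count, per pixel, how often consecutive projections differ.
        motion = sum((np.abs(b - a) > 0).astype(float)
                     for a, b in zip(seq, seq[1:]))
        trails.append(motion + seq[-1])
    return trails

def hog_descriptor(img, cell=8, bins=9):
    """Minimal HOG: per-cell, magnitude-weighted histograms of
    unsigned gradient orientation, L2-normalized per cell."""
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi),
                                   weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(feats)

# Synthetic example: a square patch translating downward over 4 frames.
frames = []
for t in range(4):
    f = np.zeros((32, 32), dtype=int)
    f[8 + t:16 + t, 8:16] = 100
    frames.append(f)
trails = motion_trail(frames)
descriptor = np.concatenate([hog_descriptor(tr) for tr in trails])
```

Concatenating the three per-plane HOG vectors gives one fixed-length descriptor per gesture sequence, which could then be fed to any off-the-shelf classifier.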

Related Material


[bibtex]
@InProceedings{Liang_2013_ICCV_Workshops,
author = {Bin Liang and Lihong Zheng},
title = {Three Dimensional Motion Trail Model for Gesture Recognition},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {June},
year = {2013}
}