American Sign Language Alphabet Recognition Using Microsoft Kinect

Cao Dong, Ming C. Leu, Zhaozheng Yin; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2015, pp. 44-52

Abstract


American Sign Language (ASL) alphabet recognition using marker-less vision sensors is a challenging task due to the complexity of ASL alphabet signs, self-occlusion of the hand, and the limited resolution of the sensors. This paper describes a new method for ASL alphabet recognition using a low-cost depth camera, the Microsoft Kinect. A segmented hand configuration is first obtained using a per-pixel classification algorithm based on depth contrast features. A hierarchical mode-seeking method is then developed and implemented to localize hand joint positions under kinematic constraints. Finally, a Random Forest (RF) classifier is built on the joint angles to recognize ASL signs. To validate the performance of this method, we used a publicly available dataset from Surrey University. The results show that our method achieves above 90% accuracy in recognizing 24 static ASL alphabet signs, significantly higher than previous benchmarks.
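
The sketch below is a minimal illustration (in Python) of two stages named in the abstract: a depth-difference ("contrast") feature of the kind commonly used for per-pixel classification on depth images, and a Random Forest trained on joint-angle feature vectors to predict one of the 24 static letters. It is not the authors' implementation; the helper names (depth_contrast_feature, probe), the offsets, the 20-dimensional joint-angle layout, and the placeholder training data are assumptions made purely for illustration.

# Illustrative sketch only -- not the paper's code. Assumes a depth image
# stored as a NumPy array (in metres) and per-frame joint angles already extracted.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def depth_contrast_feature(depth, px, py, u, v, background=10.0):
    """Depth-difference feature for pixel (px, py).

    u and v are 2-D pixel offsets, scaled by the depth at the pixel so the
    feature is approximately depth-invariant (a common construction for
    per-pixel body/hand-part classification on depth images).
    """
    d0 = depth[py, px]
    def probe(offset):
        ox = int(round(px + offset[0] / d0))
        oy = int(round(py + offset[1] / d0))
        if 0 <= oy < depth.shape[0] and 0 <= ox < depth.shape[1]:
            return depth[oy, ox]
        return background          # off-image probes read as far background
    return probe(u) - probe(v)

# Final stage of the pipeline: Random Forest on joint angles.
# X: one row of joint angles per frame (dimension is an assumption);
# y: the ASL letter label for that frame (24 static letters, i.e. no J or Z).
rng = np.random.default_rng(0)
X_train = rng.random((500, 20))            # placeholder joint-angle features
y_train = rng.integers(0, 24, size=500)    # placeholder letter labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

X_test = rng.random((10, 20))
predicted_letters = clf.predict(X_test)

In practice the joint angles would come from the paper's hierarchical mode-seeking stage rather than random placeholders, and the forest's hyperparameters would be tuned on the Surrey dataset.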

Related Material


[pdf]
[bibtex]
@InProceedings{Dong_2015_CVPR_Workshops,
author = {Dong, Cao and Leu, Ming C. and Yin, Zhaozheng},
title = {American Sign Language Alphabet Recognition Using Microsoft Kinect},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2015}
}