Towards Real-time Sign Language Interpreting Robot: Evaluation of Non-manual Components on Recognition Accuracy

Arman Sabyrov, Medet Mukushev, Vadim Kimmelman; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 75-82

Abstract


The purpose of this work is to develop a human-robot interaction system that can serve as a sign language interpreter. The paper presents the results of ongoing work aimed at recognizing sign language in real time. The motivation behind this work lies in the need to differentiate between similar signs that differ only in the non-manual components present in every sign. To this end, we recorded 2000 videos of twenty frequently used signs in Kazakh-Russian Sign Language (K-RSL), which have similar manual components but differ in non-manual components (i.e., facial expressions, eyebrow height, mouth, and head orientation). We conducted a series of evaluations to investigate whether non-manual components improve sign recognition accuracy. Among standard machine learning approaches, Logistic Regression produced the best results: 73% accuracy on the dataset with 20 signs and 80.25% accuracy on the dataset with two classes (statement vs. question).
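To illustrate the classification setup the abstract describes, below is a minimal sketch in Python using scikit-learn's LogisticRegression. The feature layout (one vector per video, concatenating manual hand descriptors with non-manual descriptors such as eyebrow height and head orientation), the feature dimensions, and the use of scikit-learn are assumptions for illustration only; the paper does not specify this pipeline, and random data stands in for the recorded K-RSL videos.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical setup: each of the 2000 videos is summarized as a single
# feature vector concatenating manual features (hand keypoints) with
# non-manual features (eyebrows, mouth, head orientation). Dimensions
# are placeholders, and random data replaces the real K-RSL recordings.
rng = np.random.default_rng(0)
n_videos, n_manual, n_nonmanual = 2000, 84, 16
X_manual = rng.normal(size=(n_videos, n_manual))
X_nonmanual = rng.normal(size=(n_videos, n_nonmanual))
X = np.hstack([X_manual, X_nonmanual])
y = rng.integers(0, 20, size=n_videos)  # 20 sign classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Multinomial logistic regression, the best-performing model in the paper.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

The same pipeline applies to the two-class statement-vs-question evaluation by replacing the 20-way labels with binary ones; comparing runs with and without the non-manual columns is one way to probe their contribution to accuracy.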

Related Material


@InProceedings{Sabyrov_2019_CVPR_Workshops,
author = {Sabyrov, Arman and Mukushev, Medet and Kimmelman, Vadim},
title = {Towards Real-time Sign Language Interpreting Robot: Evaluation of Non-manual Components on Recognition Accuracy},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}