Mutual Support of Data Modalities in the Task of Sign Language Recognition

Ivan Gruber, Zdenek Krnoul, Marek Hruz, Jakub Kanis, Matyas Bohacek; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 3424-3433

Abstract

This paper presents the automatic sign language recognition method we used in the CVPR 2021 ChaLearn Challenge (RGB track). The method combines several approaches in an ensemble scheme to perform isolated sign-gesture recognition. We fuse four modalities: video frames processed by a 3D ConvNet (I3D), body-pose information in the form of joint locations processed by a Transformer, hand-region images mapped into a semantic space, and linguistically defined hand locations. Although the individual models perform modestly on their own (60% to 93% accuracy on validation data), the weighted ensemble reaches 95.46% accuracy.
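
The abstract describes a weighted late-fusion ensemble over modality-specific classifiers. The sketch below illustrates that fusion step under stated assumptions: the model names, weights, and the 226-class setting (the AUTSL dataset used by the challenge) are chosen here for illustration and are not values reported in the paper.

    # Minimal sketch of weighted late fusion, assuming each modality model
    # outputs a probability vector over the sign classes. Model names,
    # weights, and the 226-class setting are illustrative assumptions.
    import numpy as np

    def weighted_ensemble(probabilities, weights):
        """Fuse per-model class-probability vectors with scalar weights
        and return the index of the winning sign class."""
        num_classes = next(iter(probabilities.values())).shape[0]
        fused = np.zeros(num_classes)
        for name, probs in probabilities.items():
            fused += weights.get(name, 0.0) * probs
        return int(np.argmax(fused))

    # Hypothetical usage with the four modalities named in the abstract.
    rng = np.random.default_rng(0)
    model_names = ("i3d_rgb", "pose_transformer", "hand_semantic", "hand_location")
    probabilities = {name: rng.dirichlet(np.ones(226)) for name in model_names}
    weights = {"i3d_rgb": 0.4, "pose_transformer": 0.3,
               "hand_semantic": 0.2, "hand_location": 0.1}
    predicted_sign = weighted_ensemble(probabilities, weights)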

Related Material

[pdf]
[bibtex]
@InProceedings{Gruber_2021_CVPR,
    author    = {Gruber, Ivan and Krnoul, Zdenek and Hruz, Marek and Kanis, Jakub and Bohacek, Matyas},
    title     = {Mutual Support of Data Modalities in the Task of Sign Language Recognition},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {3424-3433}
}