Deep Learning of Mouth Shapes for Sign Language

Oscar Koller, Hermann Ney, Richard Bowden; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2015, pp. 85-91

Abstract


This paper deals with robust modelling of mouth shapes in the context of sign language recognition using deep convolutional neural networks. Sign language mouth shapes are difficult to annotate, and thus hardly any publicly available annotations exist. As such, this work exploits related information sources as weak supervision. Humans mainly look at the face during sign language communication, where mouth shapes play an important role and constitute natural patterns with large variability. However, most scientific research on sign language recognition still disregards the face, and hardly any works explicitly focus on mouth shapes. This paper presents our advances in the field of sign language recognition. We contribute in the following areas: we present a scheme to learn a convolutional neural network in a weakly supervised fashion without explicit frame labels; we propose a way to incorporate neural network classifier outputs into an HMM approach; finally, we achieve a significant improvement in classification performance of mouth shapes over the current state of the art.
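
Incorporating CNN classifier outputs into an HMM is commonly done with the hybrid trick: the softmax posteriors p(class | frame) are divided by the class priors p(class) to obtain scaled likelihoods that can serve as HMM emission scores. The sketch below illustrates this idea only; the function name, array shapes, and prior estimation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def posteriors_to_emission_scores(posteriors, class_priors, eps=1e-10):
    """Convert frame-wise CNN posteriors (T x C) into scaled log-likelihoods.

    Hybrid CNN-HMM assumption: p(x | s) is proportional to p(s | x) / p(s),
    so dividing posteriors by class priors gives emission scores up to a
    constant, which is sufficient for HMM decoding.
    """
    scaled = posteriors / np.maximum(class_priors, eps)  # p(s|x) / p(s)
    return np.log(np.maximum(scaled, eps))               # log domain for decoding

# Illustrative example: 3 frames, 4 hypothetical mouth-shape classes
posteriors = np.array([[0.7, 0.1, 0.1, 0.1],
                       [0.2, 0.5, 0.2, 0.1],
                       [0.1, 0.1, 0.6, 0.2]])
class_priors = posteriors.mean(axis=0)  # priors estimated from (weak) alignments
emission_log_scores = posteriors_to_emission_scores(posteriors, class_priors)
```

These log-scores would then replace the Gaussian emission probabilities in a standard HMM decoder, leaving the transition model untouched.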

Related Material


[pdf]
[bibtex]
@InProceedings{Koller_2015_ICCV_Workshops,
author = {Koller, Oscar and Ney, Hermann and Bowden, Richard},
title = {Deep Learning of Mouth Shapes for Sign Language},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {December},
year = {2015}
}