Re-Sign: Re-Aligned End-To-End Sequence Modelling With Deep Recurrent CNN-HMMs

Oscar Koller, Sepehr Zargaran, Hermann Ney; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 4297-4305

Abstract


This work presents an iterative re-alignment approach applicable to visual sequence labelling tasks such as gesture recognition, activity recognition and continuous sign language recognition. Previous methods dealing with video data usually rely on given frame labels to train their classifiers. However, looking at recent data sets, these labels often tend to be noisy, a fact that is commonly overlooked. We propose an algorithm that treats the provided training labels as weak labels and refines the label-to-image alignment on-the-fly in a weakly supervised fashion. Given a series of frames and sequence-level labels, a deep recurrent CNN-BLSTM network is trained end-to-end. Embedded into an HMM, the resulting deep model corrects the frame labels and continuously improves its performance over several re-alignment iterations. We evaluate on two challenging publicly available sign recognition benchmark data sets featuring over 1000 classes. We outperform the state-of-the-art by up to 10% absolute and 30% relative.
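The core loop the abstract describes is train / forced-align / relabel / repeat. A minimal sketch of that idea, using a toy per-class mean "classifier" in place of the paper's CNN-BLSTM-HMM and a monotonic Viterti-style forced alignment (all names and the 1-D toy data are illustrative assumptions, not from the paper):

```python
# Toy sketch of iterative re-alignment: noisy frame labels are treated as
# weak labels and corrected by forced alignment after each training pass.

def forced_alignment(frame_scores, label_seq):
    """Monotonic alignment of frames to the given label sequence that
    maximises the summed frame scores (dynamic programming)."""
    T, L = len(frame_scores), len(label_seq)
    NEG = float("-inf")
    dp = [[NEG] * L for _ in range(T)]
    back = [[0] * L for _ in range(T)]
    dp[0][0] = frame_scores[0][label_seq[0]]
    for t in range(1, T):
        for j in range(min(t + 1, L)):
            stay = dp[t - 1][j]                      # keep the same label
            adv = dp[t - 1][j - 1] if j > 0 else NEG  # advance to next label
            prev = max(stay, adv)
            if prev == NEG:
                continue
            dp[t][j] = prev + frame_scores[t][label_seq[j]]
            back[t][j] = j if stay >= adv else j - 1
    # backtrace from the last label of the sequence
    j, path = L - 1, [0] * T
    for t in range(T - 1, -1, -1):
        path[t] = label_seq[j]
        if t > 0:
            j = back[t][j]
    return path

def train(frames, frame_labels, num_classes):
    """Stand-in for network training: per-class mean of 1-D features."""
    sums, counts = [0.0] * num_classes, [0] * num_classes
    for x, y in zip(frames, frame_labels):
        sums[y] += x
        counts[y] += 1
    return [sums[c] / counts[c] if counts[c] else 0.0 for c in range(num_classes)]

def score(frames, means):
    """Frame-wise class scores: negative squared distance to class mean."""
    return [[-(x - m) ** 2 for m in means] for x in frames]

# Toy sequence: class-0 frames near 0.0, then class-1 frames near 1.0,
# but the initial (weak) frame labels put the boundary in the wrong place.
frames = [0.1, 0.0, 0.2, 0.1, 0.9, 1.0, 1.1, 0.95]
labels = [0, 1]                            # sequence-level labels only
frame_labels = [0, 0, 1, 1, 1, 1, 1, 1]    # noisy initial alignment

for _ in range(3):                          # a few re-alignment iterations
    means = train(frames, frame_labels, 2)
    frame_labels = forced_alignment(score(frames, means), labels)

print(frame_labels)  # boundary moves to the true change point: [0,0,0,0,1,1,1,1]
```

In the paper the trained model is a far stronger deep recurrent CNN-BLSTM embedded in an HMM, but the EM-like structure of the loop is the same: the re-estimated model produces a better alignment, which in turn provides cleaner frame labels for the next pass.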

Related Material


[bibtex]
@InProceedings{Koller_2017_CVPR,
author = {Koller, Oscar and Zargaran, Sepehr and Ney, Hermann},
title = {Re-Sign: Re-Aligned End-To-End Sequence Modelling With Deep Recurrent CNN-HMMs},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {July},
year = {2017}
}