Spatial-Temporal Attention Res-TCN for Skeleton-based Dynamic Hand Gesture Recognition

Jingxuan Hou, Guijin Wang, Xinghao Chen, Jing-Hao Xue, Rui Zhu, Huazhong Yang; Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018

Abstract


Dynamic hand gesture recognition is a crucial yet challenging task in computer vision. The key to this task lies in the effective extraction of discriminative spatial and temporal features to model the evolution of different gestures. In this paper, we propose an end-to-end Spatial-Temporal Attention Residual Temporal Convolutional Network (STA-Res-TCN) for skeleton-based dynamic hand gesture recognition, which learns different levels of attention and assigns them to each spatial-temporal feature extracted by the convolution filters at each time step. The proposed attention branch helps the network adaptively focus on the informative time frames and features while excluding the irrelevant ones that often introduce unnecessary noise. Moreover, our proposed STA-Res-TCN is a lightweight model that can be trained and tested in an extremely short time. Experiments on the DHG-14/28 Dataset and the SHREC'17 Track Dataset show that STA-Res-TCN outperforms state-of-the-art methods on both the 14-gesture setting and the more complicated 28-gesture setting.
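The abstract describes a residual temporal-convolution block whose attention branch produces a per-feature, per-time-step mask that gates the extracted features. As a rough illustration only (not the authors' implementation), the sketch below uses NumPy with hypothetical layer sizes and a sigmoid attention branch; all names (`temporal_conv`, `sta_res_block`) and the exact wiring are assumptions for illustration.

```python
import numpy as np

def temporal_conv(x, w):
    """Simple 1-D convolution over time with 'same' padding.
    x: (T, C_in) skeleton features, w: (k, C_in, C_out) filters."""
    k, _, c_out = w.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros((x.shape[0], c_out))
    for t in range(x.shape[0]):
        out[t] = np.tensordot(xp[t:t + k], w, axes=([0, 1], [0, 1]))
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sta_res_block(x, w_main, w_attn):
    """One residual block with an attention branch: the main branch
    extracts spatial-temporal features; the attention branch emits a
    [0, 1] mask for each feature at each time step, which scales the
    features before the residual addition."""
    features = np.maximum(temporal_conv(x, w_main), 0.0)  # ReLU features
    mask = sigmoid(temporal_conv(x, w_attn))              # attention weights
    return x + mask * features                            # residual connection

# Toy usage with hypothetical sizes: 32 frames, 16 channels.
rng = np.random.default_rng(0)
T, C = 32, 16
x = rng.standard_normal((T, C))
out = sta_res_block(x,
                    rng.standard_normal((3, C, C)) * 0.1,
                    rng.standard_normal((3, C, C)) * 0.1)
print(out.shape)  # → (32, 16)
```

Because the mask lies in [0, 1], uninformative time frames and channels are attenuated toward the identity shortcut rather than passed through at full strength, which matches the abstract's description of suppressing noisy, irrelevant frames.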

Related Material


[bibtex]
@InProceedings{Hou_2018_ECCV_Workshops,
author = {Hou, Jingxuan and Wang, Guijin and Chen, Xinghao and Xue, Jing-Hao and Zhu, Rui and Yang, Huazhong},
title = {Spatial-Temporal Attention Res-TCN for Skeleton-based Dynamic Hand Gesture Recognition},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV) Workshops},
month = {September},
year = {2018}
}