Large-Scale Multimodal Gesture Recognition Using Heterogeneous Networks

Huogen Wang, Pichao Wang, Zhanjie Song, Wanqing Li; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 3129-3137

Abstract


This paper presents the method designed for the 2017 ChaLearn LAP Large-scale Gesture Recognition Challenge. The proposed method converts a video sequence into multiple body-level and hand-level dynamic images through bidirectional rank pooling, which serve as inputs to Convolutional Neural Networks (ConvNets). In parallel, it adopts a Convolutional LSTM network (ConvLSTM) to learn long-term spatiotemporal features from the short-term spatiotemporal features extracted by a 3D convolutional neural network (3DCNN), again at both body and hand levels. This heterogeneous network system effectively learns different levels of spatiotemporal features that complement each other, substantially improving recognition accuracy. The method has been evaluated on the 2017 isolated and continuous ChaLearn LAP Large-scale Gesture Recognition Challenge datasets, and its results ranked among the top performances.
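The dynamic-image construction above can be illustrated with the approximate rank pooling of Bilen et al., which the dynamic-image literature commonly uses in place of solving the full rank-pooling objective: each frame t of a T-frame clip is weighted by the coefficient 2t - T - 1 and the weighted frames are summed; running the same pooling on the reversed sequence gives the second (backward) image of the bidirectional pair. The sketch below is a minimal NumPy illustration under those assumptions, not the authors' implementation; the function names are ours.

```python
import numpy as np

def dynamic_image(frames: np.ndarray) -> np.ndarray:
    """Collapse a clip (T, H, W, C) into one dynamic image via
    approximate rank pooling: weight frame t by (2t - T - 1), sum,
    then rescale to [0, 255] so it can feed an ordinary ConvNet."""
    T = frames.shape[0]
    t = np.arange(1, T + 1, dtype=np.float64)
    alpha = 2.0 * t - T - 1.0          # approximate rank-pooling weights
    di = np.tensordot(alpha, frames.astype(np.float64), axes=(0, 0))
    di -= di.min()                     # shift to non-negative range
    if di.max() > 0:
        di *= 255.0 / di.max()         # rescale to image range
    return di.astype(np.uint8)

def bidirectional_dynamic_images(frames: np.ndarray):
    """Forward and backward dynamic images, as in bidirectional rank
    pooling: pool the clip and its temporally reversed copy."""
    return dynamic_image(frames), dynamic_image(frames[::-1])
```

In the paper's pipeline such pooling would be applied separately to the body-level frames and to cropped hand regions, yielding one ConvNet input image per direction and per level.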

Related Material


[pdf]
[bibtex]
@InProceedings{Wang_2017_ICCV,
author = {Wang, Huogen and Wang, Pichao and Song, Zhanjie and Li, Wanqing},
title = {Large-Scale Multimodal Gesture Recognition Using Heterogeneous Networks},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}