Bayesian Graph Convolution LSTM for Skeleton Based Action Recognition

Rui Zhao, Kang Wang, Hui Su, Qiang Ji; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 6882-6892


We propose a framework for recognizing human actions from skeleton data by modeling the underlying dynamic process that generates the motion pattern. We capture three major factors that contribute to the complexity of the motion pattern: spatial dependencies among body joints, temporal dependencies of body poses, and variation among subjects in action execution. We utilize graph convolution to extract a structure-aware feature representation from pose data by exploiting the skeleton anatomy. A long short-term memory (LSTM) network is then used to capture the temporal dynamics of the data. Finally, the whole model is extended under the Bayesian framework to a probabilistic model in order to better capture the stochasticity and variation in the data. An adversarial prior is developed to regularize the model parameters and improve the generalization of the model. A Bayesian inference problem is formulated to solve the classification task. We demonstrate the benefit of this framework on several benchmark datasets, with recognition evaluated under various generalization conditions.
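To make the two deterministic building blocks of the abstract concrete, the sketch below shows, in plain Python, a graph convolution over the skeleton adjacency followed by one LSTM step on the pooled pose feature. All shapes, parameter names, and the mean-pooling step are illustrative assumptions; this is a generic GC-LSTM sketch, not the authors' model (in particular, it omits the Bayesian extension and the adversarial prior).

```python
import math

def matmul(X, Y):
    """Naive matrix product for lists of lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def graph_conv(A, H, W):
    """One graph-convolution layer: ReLU(A_hat @ H @ W).

    A: J x J skeleton adjacency with self-loops, H: J x F joint features,
    W: F x F' weights. A_hat row-normalizes A so each joint aggregates
    features from its anatomical neighbors (a common GCN variant;
    the exact normalization here is an assumption).
    """
    J = len(A)
    A_hat = [[A[i][j] / sum(A[i]) for j in range(J)] for i in range(J)]
    Z = matmul(matmul(A_hat, H), W)
    return [[max(0.0, v) for v in row] for row in Z]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One standard LSTM time step with hidden size d.

    W (4d x input), U (4d x d), b (4d) stack the parameters of the
    input, forget, output, and candidate gates, in that order.
    """
    d = len(h)
    # Gate pre-activations: W @ x + U @ h + b.
    z = [sum(W[g][i] * x[i] for i in range(len(x))) +
         sum(U[g][i] * h[i] for i in range(d)) + b[g]
         for g in range(4 * d)]
    i_g = [sigmoid(z[k]) for k in range(0, d)]
    f_g = [sigmoid(z[k]) for k in range(d, 2 * d)]
    o_g = [sigmoid(z[k]) for k in range(2 * d, 3 * d)]
    g_g = [math.tanh(z[k]) for k in range(3 * d, 4 * d)]
    c_new = [f_g[k] * c[k] + i_g[k] * g_g[k] for k in range(d)]
    h_new = [o_g[k] * math.tanh(c_new[k]) for k in range(d)]
    return h_new, c_new

# Toy run: a 3-joint chain skeleton, 2-dim joint features, hidden size 2.
A = [[1, 1, 0], [1, 1, 1], [0, 1, 1]]      # adjacency plus self-loops
W_gc = [[0.5, -0.2], [0.1, 0.3]]           # 2x2 graph-conv weights
d = 2
W = [[0.1] * 2 for _ in range(4 * d)]      # toy LSTM parameters
U = [[0.1] * d for _ in range(4 * d)]
b = [0.0] * (4 * d)
poses = [                                  # two frames of 3x2 joint features
    [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]],
    [[0.1, 0.9], [0.5, 0.5], [0.9, 0.1]],
]
h, c = [0.0] * d, [0.0] * d
for frame in poses:
    H = graph_conv(A, frame, W_gc)
    # Mean-pool over joints to get one pose vector per frame.
    pooled = [sum(col) / len(H) for col in zip(*H)]
    h, c = lstm_step(pooled, h, c, W, U, b)
```

The final `h` would be fed to a classifier head; in the paper's probabilistic extension the parameters would additionally be treated as random variables, which this deterministic sketch does not attempt.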

Related Material

@InProceedings{Zhao_2019_ICCV,
author = {Zhao, Rui and Wang, Kang and Su, Hui and Ji, Qiang},
title = {Bayesian Graph Convolution LSTM for Skeleton Based Action Recognition},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}