Skeleton-Based Action Recognition With Directed Graph Neural Networks

Lei Shi, Yifan Zhang, Jian Cheng, Hanqing Lu; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 7912-7921

Abstract


Skeleton data have been widely used for action recognition tasks because they are robust to dynamic circumstances and complex backgrounds. Existing methods have shown that both the joint and bone information in skeleton data are of great help for action recognition; however, how best to combine these two types of data to exploit the relationship between joints and bones remains an open problem. In this work, we represent the skeleton data as a directed acyclic graph based on the kinematic dependencies between the joints and bones of the natural human body. A novel directed graph neural network is designed specifically to extract the features of joints, bones and their relations, and to make predictions based on the extracted features. In addition, to better fit the action recognition task, the topological structure of the graph is made adaptive during training, which brings a notable improvement. Moreover, the motion information of the skeleton sequence is exploited and combined with the spatial information in a two-stream framework to further enhance performance. Our final model is tested on two large-scale datasets, NTU-RGBD and Skeleton-Kinetics, and exceeds state-of-the-art performance on both of them.
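The inputs the abstract mentions can be sketched concretely: bone features are commonly derived as coordinate differences between a joint and its kinematic parent, and motion features as differences between consecutive frames. The snippet below is a minimal illustration under those assumptions; the 5-joint kinematic tree is a toy example, not the dataset's actual skeleton definition, and the feature definitions are a common convention rather than the paper's exact formulation.

```python
import numpy as np

# Toy skeleton sequence: (T, V, C) = (frames, joints, xyz coordinates).
T, V, C = 4, 5, 3
rng = np.random.default_rng(0)
joints = rng.standard_normal((T, V, C))

# Hypothetical kinematic tree: parents[i] is the parent joint of joint i.
# Joint 0 is the root, so its "bone" vector is zero.
parents = [0, 0, 1, 2, 3]

def bone_features(j, parents):
    # Each directed bone points from a parent joint to its child joint.
    return j - j[:, parents, :]

def motion_features(x):
    # Temporal difference between consecutive frames (last frame zero-padded).
    m = np.zeros_like(x)
    m[:-1] = x[1:] - x[:-1]
    return m

bones = bone_features(joints, parents)          # spatial bone stream input
joint_motion = motion_features(joints)          # temporal joint stream input
bone_motion = motion_features(bones)            # temporal bone stream input
```

These four tensors (joints, bones, and their temporal differences) correspond to the spatial and motion information that the two-stream framework consumes.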

Related Material


[pdf]
[bibtex]
@InProceedings{Shi_2019_CVPR,
author = {Shi, Lei and Zhang, Yifan and Cheng, Jian and Lu, Hanqing},
title = {Skeleton-Based Action Recognition With Directed Graph Neural Networks},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019},
pages = {7912-7921}
}