Explicit Spatiotemporal Joint Relation Learning for Tracking Human Pose

Xiao Sun, Chuankang Li, Stephen Lin; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019

Abstract


We present a method for human pose tracking that is based on learning spatiotemporal relationships among joints. Beyond generating the heatmap of a joint in a given frame, our system also learns to predict the offset of the joint from a neighboring joint in the same frame. In addition, it is trained to predict the displacement of a joint from its position in the previous frame, in a manner that, unlike optical flow, can account for changes in joint appearance. These relational cues in the spatial and temporal domains are inferred robustly by attending only to relevant areas in the video frames. By explicitly learning and exploiting these joint relationships, our system achieves state-of-the-art performance on standard benchmarks for several pose tracking tasks, including 3D body pose tracking in RGB video, 3D hand pose tracking in depth sequences, and 3D hand gesture tracking in RGB video.
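To make the three predicted quantities concrete, below is a minimal PyTorch sketch of such a prediction head. It is not the authors' implementation: the module name JointRelationHead, the 1x1-convolution heads, and all channel counts are illustrative assumptions, chosen only to show how a network could emit per-joint heatmaps, spatial offsets to a neighboring joint, and temporal displacements from the previous frame.

import torch
import torch.nn as nn

class JointRelationHead(nn.Module):
    """Sketch of a head producing the three outputs described in the abstract:
    per-joint heatmaps, spatial joint-to-neighbor offsets, and temporal
    displacements from the previous frame (names/shapes are assumptions)."""

    def __init__(self, in_channels: int, num_joints: int):
        super().__init__()
        # Per-joint detection heatmaps for the current frame.
        self.heatmap = nn.Conv2d(in_channels, num_joints, kernel_size=1)
        # 2D offset of each joint from a neighboring joint in the same frame
        # (spatial relation), two channels (dx, dy) per joint.
        self.bone_offset = nn.Conv2d(in_channels, num_joints * 2, kernel_size=1)
        # 2D displacement of each joint from its previous-frame position
        # (temporal relation); the input concatenates features of both frames.
        self.displacement = nn.Conv2d(in_channels * 2, num_joints * 2, kernel_size=1)

    def forward(self, feat_t: torch.Tensor, feat_prev: torch.Tensor):
        heat = self.heatmap(feat_t)
        bones = self.bone_offset(feat_t)
        disp = self.displacement(torch.cat([feat_t, feat_prev], dim=1))
        return heat, bones, disp

# Example usage with dummy backbone features for two consecutive frames.
head = JointRelationHead(in_channels=256, num_joints=17)
feat_t = torch.randn(1, 256, 64, 64)
feat_prev = torch.randn(1, 256, 64, 64)
heat, bones, disp = head(feat_t, feat_prev)  # (1,17,64,64), (1,34,64,64), (1,34,64,64)

Note that the displacement head sees features from both frames, so it can follow a joint whose appearance changes between frames, which a fixed-appearance cue such as optical flow cannot.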

Related Material


[pdf]
[bibtex]
@InProceedings{Sun_2019_ICCV,
author = {Sun, Xiao and Li, Chuankang and Lin, Stephen},
title = {Explicit Spatiotemporal Joint Relation Learning for Tracking Human Pose},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}