3D Human Pose Estimation in Video With Temporal Convolutions and Semi-Supervised Training

Dario Pavllo, Christoph Feichtenhofer, David Grangier, Michael Auli; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 7753-7762

Abstract


In this work, we demonstrate that 3D poses in video can be effectively estimated with a fully convolutional model based on dilated temporal convolutions over 2D keypoints. We also introduce back-projection, a simple and effective semi-supervised training method that leverages unlabeled video data. We start with predicted 2D keypoints for unlabeled video, then estimate 3D poses and finally back-project to the input 2D keypoints. In the supervised setting, our fully convolutional model outperforms the previous best result from the literature by 6 mm mean per-joint position error on Human3.6M, corresponding to an error reduction of 11%, and the model also shows significant improvements on HumanEva-I. Moreover, experiments with back-projection show that it comfortably outperforms previous state-of-the-art results in semi-supervised settings where labeled data is scarce. Code and models are available at https://github.com/facebookresearch/VideoPose3D.
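
The two ideas in the abstract can be summarized in a short sketch. Below is a minimal, illustrative PyTorch example (not the released VideoPose3D code): a dilated temporal convolutional network that maps a window of 2D keypoints to a 3D pose, and a back-projection term that, on unlabeled video, penalizes disagreement between the re-projected 3D prediction and the input 2D keypoints. The layer width, the 27-frame receptive field, the 17-joint skeleton, and the pinhole-camera parameters in backprojection_loss are assumptions for illustration, not the paper's exact hyperparameters.

import torch
import torch.nn as nn

class TemporalPoseNet(nn.Module):
    # Dilated temporal convolutions over a sequence of 2D keypoints.
    # The 27-frame receptive field (3 x 3 x 3) and channel width are
    # illustrative choices.
    def __init__(self, num_joints=17, channels=1024):
        super().__init__()
        self.expand = nn.Conv1d(num_joints * 2, channels, kernel_size=3)
        self.hidden = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=3, dilation=d)
            for d in (3, 9)
        )
        self.shrink = nn.Conv1d(channels, num_joints * 3, kernel_size=1)
        self.relu = nn.ReLU()

    def forward(self, keypoints_2d):
        # keypoints_2d: (batch, frames, joints, 2)
        b, t, j, _ = keypoints_2d.shape
        x = keypoints_2d.reshape(b, t, j * 2).permute(0, 2, 1)
        x = self.relu(self.expand(x))
        for conv in self.hidden:
            x = self.relu(conv(x))
        x = self.shrink(x)  # (batch, joints * 3, output frames)
        return x.permute(0, 2, 1).reshape(b, -1, j, 3)

def backprojection_loss(pred_3d, input_2d, focal, center):
    # Hypothetical pinhole re-projection of the predicted 3D pose (assumed
    # to be in camera coordinates) onto the image plane; the unsupervised
    # term penalizes the distance to the input 2D keypoints.
    projected = pred_3d[..., :2] / pred_3d[..., 2:].clamp(min=1e-4) * focal + center
    return (projected - input_2d).abs().mean()

# A 27-frame window of 17 2D joints yields the 3D pose of the centre frame.
model = TemporalPoseNet()
poses_3d = model(torch.randn(2, 27, 17, 2))  # -> (2, 1, 17, 3)

In the semi-supervised setting described above, the 2D keypoints for unlabeled clips would come from an off-the-shelf detector, and the back-projection term compares the re-projected 3D prediction against those same detections.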

Related Material


BibTeX:
@InProceedings{Pavllo_2019_CVPR,
    author    = {Pavllo, Dario and Feichtenhofer, Christoph and Grangier, David and Auli, Michael},
    title     = {3D Human Pose Estimation in Video With Temporal Convolutions and Semi-Supervised Training},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2019},
    pages     = {7753-7762}
}