TEMPO: Efficient Multi-View Pose Estimation, Tracking, and Forecasting

Rohan Choudhury, Kris M. Kitani, László A. Jeni; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 14750-14760

Abstract


Existing volumetric methods for 3D human pose estimation are accurate, but computationally expensive and optimized for single-time-step prediction. We present TEMPO, an efficient multi-view pose estimation model that learns a robust spatiotemporal representation, improving pose accuracy while also tracking and forecasting human pose. We significantly reduce computation compared to the state-of-the-art by recurrently computing per-person 2D pose features, fusing both spatial and temporal information into a single representation. In doing so, our model is able to use spatiotemporal context to predict more accurate human poses without sacrificing efficiency. We further use this representation to track human poses over time as well as predict future poses. Finally, we demonstrate that our model generalizes across datasets without scene-specific fine-tuning. TEMPO achieves 10% lower MPJPE with a 33x improvement in FPS compared to TesseTrack on the challenging CMU Panoptic Studio dataset. Our code and demos are available at https://rccchoudhury.github.io/tempo2023.

Related Material


BibTeX:
@InProceedings{Choudhury_2023_ICCV,
  author    = {Choudhury, Rohan and Kitani, Kris M. and Jeni, L\'aszl\'o A.},
  title     = {TEMPO: Efficient Multi-View Pose Estimation, Tracking, and Forecasting},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023},
  pages     = {14750-14760}
}