PhysPT: Physics-aware Pretrained Transformer for Estimating Human Dynamics from Monocular Videos

Yufei Zhang, Jeffrey O. Kephart, Zijun Cui, Qiang Ji; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 2305-2317

Abstract


While current methods have shown promising progress on estimating 3D human motion from monocular videos, their motion estimates are often physically unrealistic because they mainly consider kinematics. In this paper, we introduce the Physics-aware Pretrained Transformer (PhysPT), which improves kinematics-based motion estimates and infers motion forces. PhysPT exploits a Transformer encoder-decoder backbone to effectively learn human dynamics in a self-supervised manner. Moreover, it incorporates physics principles governing human motion. Specifically, we build a physics-based body representation and contact force model. We leverage them to impose novel physics-inspired training losses (i.e., force loss, contact loss, and Euler-Lagrange loss), enabling PhysPT to capture physical properties of the human body and the forces it experiences. Experiments demonstrate that, once trained, PhysPT can be directly applied to kinematics-based estimates to significantly enhance their physical plausibility and generate favourable motion forces. Furthermore, we show that these physically meaningful quantities translate into improved accuracy on an important downstream task: human action recognition.
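To give a concrete sense of what an "Euler-Lagrange loss" can look like, the sketch below penalizes the residual of the classical equation of motion M(q)q̈ + c(q, q̇) = τ for a toy 1-DoF pendulum. This is a minimal illustration under our own assumptions, not the paper's actual body model or loss implementation; all function names here are hypothetical.

```python
import numpy as np

def pendulum_terms(q, qd, m=1.0, l=1.0, g=9.81):
    """Inertia M and bias term c (gravity; Coriolis vanishes for 1 DoF)
    for a point-mass pendulum of mass m on a massless rod of length l."""
    M = m * l**2                 # scalar inertia about the pivot
    c = m * g * l * np.sin(q)    # gravity torque
    return M, c

def euler_lagrange_loss(q, qd, qdd, tau):
    """Mean squared residual of M(q) q'' + c(q, q') - tau over a trajectory.
    Zero when the motion (q, q', q'') is dynamically consistent with tau."""
    M, c = pendulum_terms(q, qd)
    residual = M * qdd + c - tau
    return np.mean(residual**2)

# A trajectory whose torques are computed from the dynamics gives zero loss,
# while ignoring the required torques (tau = 0) leaves a large residual.
t = np.linspace(0.0, 1.0, 200)
q = 0.1 * np.sin(2 * np.pi * t)                     # prescribed joint angle
qd = 0.1 * 2 * np.pi * np.cos(2 * np.pi * t)        # analytic velocity
qdd = -0.1 * (2 * np.pi) ** 2 * np.sin(2 * np.pi * t)  # analytic acceleration
M, c = pendulum_terms(q, qd)
tau = M * qdd + c                                   # dynamically consistent torques
```

In the paper's setting the same idea applies per joint of a full-body model: motion estimates that violate the equations of motion incur a nonzero residual, which the training loss drives down.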

Related Material


@InProceedings{Zhang_2024_CVPR,
    author    = {Zhang, Yufei and Kephart, Jeffrey O. and Cui, Zijun and Ji, Qiang},
    title     = {PhysPT: Physics-aware Pretrained Transformer for Estimating Human Dynamics from Monocular Videos},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {2305-2317}
}