MV-SSM: Multi-View State Space Modeling for 3D Human Pose Estimation

Aviral Chharia, Wenbo Gou, Haoye Dong; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 11590-11599

Abstract

While significant progress has been made in single-view 3D human pose estimation, multi-view 3D human pose estimation remains challenging, particularly in generalizing to new camera configurations. Existing attention-based transformers often struggle to accurately model the spatial arrangement of keypoints, especially in occluded scenarios. They also tend to overfit specific camera arrangements and visual scenes in the training data, resulting in substantial performance drops in new settings. In this study, we introduce a novel Multi-View State Space Modeling framework, named MV-SSM, for robustly estimating 3D human keypoints. We explicitly model the joint spatial sequence at two distinct levels: the feature level from multi-view images and the person keypoint level. We propose a Projective State Space (PSS) block to learn a generalized representation of joint spatial arrangements using state space modeling. Moreover, we modify Mamba's traditional scanning into an effective Grid Token-guided Bidirectional Scanning (GTBS), which is integral to the PSS block. Multiple experiments demonstrate that MV-SSM achieves strong generalization, outperforming state-of-the-art methods: +10.8 AP25 in the challenging three-camera setting on CMU Panoptic, +7.0 AP25 under varying camera arrangements, and +15.3 PCP on Campus A1 in cross-dataset evaluations. Project Website: https://aviralchharia.github.io/MV-SSM
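
To make the bidirectional scanning idea concrete, below is a minimal sketch of a generic bidirectional state space scan over a flattened token grid, assuming the standard discretized SSM recurrence h_t = A h_{t-1} + B x_t, y_t = C h_t used by Mamba-style models. The paper's actual PSS block and the grid-token guidance in GTBS are defined in the full text and are not reproduced here; the function names (ssm_scan, bidirectional_ssm_scan) and the sum-fusion of the two passes are illustrative assumptions only.

    import numpy as np

    def ssm_scan(x, A, B, C):
        """Run the discretized linear state space recurrence
        h_t = A @ h_{t-1} + B @ x_t,  y_t = C @ h_t
        over a token sequence x of shape (T, d_in)."""
        T = x.shape[0]
        h = np.zeros(A.shape[0])
        ys = np.empty((T, C.shape[0]))
        for t in range(T):
            h = A @ h + B @ x[t]
            ys[t] = C @ h
        return ys

    def bidirectional_ssm_scan(tokens, A, B, C):
        """Hypothetical sketch: scan the token sequence forward and
        backward, then fuse the two passes by summation. The paper's
        GTBS additionally orders tokens with grid-token guidance,
        which is not reproduced here."""
        fwd = ssm_scan(tokens, A, B, C)
        bwd = ssm_scan(tokens[::-1], A, B, C)[::-1]
        return fwd + bwd

    # Toy usage: a 16x16 feature grid flattened into a 256-token sequence.
    d_in, d_state, d_out = 32, 16, 32
    rng = np.random.default_rng(0)
    tokens = rng.standard_normal((16 * 16, d_in))
    A = 0.9 * np.eye(d_state)                       # stable state transition
    B = 0.1 * rng.standard_normal((d_state, d_in))  # input projection
    C = 0.1 * rng.standard_normal((d_out, d_state)) # output projection
    out = bidirectional_ssm_scan(tokens, A, B, C)
    print(out.shape)  # (256, 32)

The motivation for scanning in both directions: a single causal scan propagates information only one way along the flattened grid, so tokens late in the sequence never inform earlier ones; scanning both ways gives every token context from both sides, which matters for spatially arranged keypoints.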

Related Material

BibTeX
@InProceedings{Chharia_2025_CVPR,
    author    = {Chharia, Aviral and Gou, Wenbo and Dong, Haoye},
    title     = {MV-SSM: Multi-View State Space Modeling for 3D Human Pose Estimation},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {11590-11599}
}