Pose Priors from Language Models

Sanjay Subramanian, Evonne Ng, Lea Müller, Dan Klein, Shiry Ginosar, Trevor Darrell; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 7125-7135

Abstract


Language is often used to describe physical interaction, yet most 3D human pose estimation methods overlook this rich source of information. We bridge this gap by leveraging large multimodal models (LMMs) as priors for reconstructing contact poses, offering a scalable alternative to traditional methods that rely on human annotations or motion capture data. Our approach extracts contact-relevant descriptors from an LMM and translates them into tractable losses to constrain 3D human pose optimization. Despite its simplicity, our method produces compelling reconstructions for both two-person interactions and self-contact scenarios, accurately capturing the semantics of physical and social interactions. Our results demonstrate that LMMs can serve as powerful tools for contact prediction and pose estimation. Our code is publicly available at prosepose.github.io.
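To make the descriptor-to-loss idea concrete, the sketch below is a minimal illustration, not the authors' code: the joint names, the contact_pairs format, and the toy joint-position parameterization are all assumptions for exposition (the paper translates LMM descriptors into losses over an optimized body model). It shows the core mechanism of converting LMM-predicted contact pairs into a differentiable loss that constrains pose optimization.

    # Minimal sketch: turn hypothetical LMM contact descriptors into a loss.
    # Joint set, pair format, and direct joint-position optimization are
    # illustrative assumptions, not the paper's actual pipeline.
    import torch

    JOINTS = ["right_hand", "left_hand", "right_shoulder", "left_shoulder", "head"]
    J = {name: i for i, name in enumerate(JOINTS)}

    # Hypothetical descriptors extracted from an LMM for a two-person hug:
    # pairs (joint on person A, joint on person B) that should be in contact.
    contact_pairs = [
        ("right_hand", "left_shoulder"),
        ("left_hand", "right_shoulder"),
    ]

    def contact_loss(joints_a, joints_b, pairs, margin=0.02):
        """Penalize distance between joint pairs the LMM predicts are touching.

        joints_a, joints_b: (num_joints, 3) tensors of 3D joint positions.
        margin: slack in meters before the penalty applies.
        """
        loss = joints_a.new_zeros(())
        for ja, jb in pairs:
            d = torch.linalg.norm(joints_a[J[ja]] - joints_b[J[jb]])
            loss = loss + torch.clamp(d - margin, min=0.0) ** 2
        return loss

    # Toy stand-in for pose parameters: optimize joint positions directly.
    torch.manual_seed(0)
    joints_a = torch.randn(len(JOINTS), 3, requires_grad=True)
    joints_b = torch.randn(len(JOINTS), 3, requires_grad=True)

    opt = torch.optim.Adam([joints_a, joints_b], lr=0.05)
    for step in range(200):
        opt.zero_grad()
        loss = contact_loss(joints_a, joints_b, contact_pairs)
        loss.backward()
        opt.step()
    print(f"final contact loss: {loss.item():.4f}")

In practice this contact term would be added to standard reprojection and pose-prior terms, with the body model's parameters (rather than raw joint positions) as the optimization variables.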

Related Material


@InProceedings{Subramanian_2025_CVPR,
    author    = {Subramanian, Sanjay and Ng, Evonne and M\"uller, Lea and Klein, Dan and Ginosar, Shiry and Darrell, Trevor},
    title     = {Pose Priors from Language Models},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {7125-7135}
}