[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Zheng_2025_WACV,
  author    = {Zheng, Ce and Liu, Xianpeng and Peng, Qucheng and Wu, Tianfu and Wang, Pu and Chen, Chen},
  title     = {DiffMesh: A Motion-Aware Diffusion Framework for Human Mesh Recovery from Videos},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {4891-4901}
}
DiffMesh: A Motion-Aware Diffusion Framework for Human Mesh Recovery from Videos
Abstract
Human mesh recovery (HMR) provides rich human body information for various real-world applications such as gaming, human-computer interaction, and virtual reality. While image-based HMR methods have achieved impressive results, they often struggle to recover humans in dynamic scenarios, leading to temporal inconsistencies and non-smooth 3D motion predictions due to the absence of human motion cues. In contrast, video-based approaches leverage temporal information to mitigate this issue. In this paper, we present DiffMesh, an innovative motion-aware diffusion framework for video-based HMR. DiffMesh establishes a bridge between diffusion models and human motion, efficiently generating accurate and smooth output mesh sequences by incorporating human motion within both the forward and reverse processes of the diffusion model. Extensive experiments on the widely used Human3.6M and 3DPW datasets demonstrate the effectiveness and efficiency of our DiffMesh. Visual comparisons in real-world scenarios further highlight DiffMesh's suitability for practical applications. The project webpage is: https://zczcwh.github.io/diffmesh_page/
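As context for the diffusion terminology in the abstract, the sketch below shows a generic DDPM-style forward (noising) step and one reverse (denoising) step applied to a sequence of per-frame mesh parameters. It is an illustrative assumption only: the tensor shapes, noise schedule, and the `denoiser` callable are hypothetical placeholders, and the motion-aware conditioning that distinguishes DiffMesh is not reproduced here.

# Illustrative sketch only (not the authors' implementation): a standard
# DDPM-style forward noising step and reverse step over a sequence of
# per-frame pose/shape parameters; shapes and schedule are assumptions.
import torch

num_frames, dim = 16, 85            # assumed: 16 video frames, 85-D parameters per frame
num_steps = 1000
betas = torch.linspace(1e-4, 2e-2, num_steps)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def forward_noise(x0, t):
    # q(x_t | x_0): noise the whole mesh-parameter sequence at step t
    noise = torch.randn_like(x0)
    xt = alphas_cumprod[t].sqrt() * x0 + (1.0 - alphas_cumprod[t]).sqrt() * noise
    return xt, noise

def reverse_step(xt, t, denoiser):
    # one DDPM reverse step; `denoiser` (hypothetical model) predicts the added noise
    eps_hat = denoiser(xt, t)
    alpha_t = 1.0 - betas[t]
    mean = (xt - betas[t] / (1.0 - alphas_cumprod[t]).sqrt() * eps_hat) / alpha_t.sqrt()
    return mean + betas[t].sqrt() * torch.randn_like(xt) if t > 0 else mean

# toy usage with a dummy denoiser that predicts zero noise
x0 = torch.randn(num_frames, dim)   # clean per-frame mesh parameters
xt, _ = forward_noise(x0, t=500)
x_prev = reverse_step(xt, t=500, denoiser=lambda x, t: torch.zeros_like(x))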