HandDiff: 3D Hand Pose Estimation with Diffusion on Image-Point Cloud

Wencan Cheng, Hao Tang, Luc Van Gool, Jong Hwan Ko; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 2274-2284

Abstract


Extracting keypoint locations from input hand frames, known as 3D hand pose estimation, is a critical task in various human-computer interaction applications. Essentially, 3D hand pose estimation can be regarded as a 3D point subset generation problem conditioned on input frames. Thanks to the recent significant progress on diffusion-based generative models, hand pose estimation can also benefit from the diffusion model to estimate keypoint locations with high quality. However, directly deploying existing diffusion models to solve hand pose estimation is non-trivial, since they cannot achieve the complex permutation mapping and precise localization. Based on this motivation, this paper proposes HandDiff, a diffusion-based hand pose estimation model that iteratively denoises an accurate hand pose conditioned on hand-shaped image-point clouds. In order to recover keypoint permutation and accurate locations, we further introduce a joint-wise condition and a local detail condition. Experimental results demonstrate that the proposed HandDiff significantly outperforms existing approaches on four challenging hand pose benchmark datasets. Code and pre-trained models are publicly available at https://github.com/cwc1260/HandDiff.
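To make the "iterative denoising of keypoint locations" idea concrete, the sketch below shows a generic DDPM-style reverse sampling loop that turns Gaussian noise into a set of 3D joint coordinates conditioned on a point-cloud feature. This is a minimal illustration under assumed names (ToyJointDenoiser, sample_joints, a single global conditioning vector), not the authors' implementation; HandDiff's actual conditioning uses richer joint-wise and local detail features extracted from the image-point cloud.

import torch
import torch.nn as nn


class ToyJointDenoiser(nn.Module):
    """Illustrative denoiser: predicts the noise added to noisy joints (B, J, 3)."""

    def __init__(self, num_joints=21, cond_dim=128):
        super().__init__()
        self.num_joints = num_joints
        self.mlp = nn.Sequential(
            nn.Linear(num_joints * 3 + cond_dim + 1, 256),
            nn.ReLU(),
            nn.Linear(256, num_joints * 3),
        )

    def forward(self, noisy_joints, t, cond):
        b = noisy_joints.shape[0]
        t_emb = t.float().view(b, 1) / 1000.0          # crude timestep embedding
        x = torch.cat([noisy_joints.view(b, -1), cond, t_emb], dim=-1)
        return self.mlp(x).view(b, self.num_joints, 3)


@torch.no_grad()
def sample_joints(denoiser, cond, num_joints=21, steps=1000):
    """DDPM ancestral sampling: iteratively denoise noise into joint coordinates."""
    betas = torch.linspace(1e-4, 2e-2, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(cond.shape[0], num_joints, 3)      # start from pure noise
    for t in reversed(range(steps)):
        t_batch = torch.full((cond.shape[0],), t, dtype=torch.long)
        eps = denoiser(x, t_batch, cond)               # predicted noise at step t
        coef = (1.0 - alphas[t]) / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x                                           # estimated 3D joint locations


# Usage: condition on a (B, 128) feature assumed to come from a point-cloud encoder.
denoiser = ToyJointDenoiser()
cond = torch.randn(4, 128)
joints = sample_joints(denoiser, cond)                 # (4, 21, 3)

In this toy version each joint is denoised from a single global condition, which is exactly the limitation the abstract points to: without joint-wise and local detail conditions, the model cannot resolve which generated point corresponds to which keypoint or localize it precisely.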

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Cheng_2024_CVPR,
    author    = {Cheng, Wencan and Tang, Hao and Van Gool, Luc and Ko, Jong Hwan},
    title     = {HandDiff: 3D Hand Pose Estimation with Diffusion on Image-Point Cloud},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {2274-2284}
}