Dynamic Iterative Refinement for Efficient 3D Hand Pose Estimation

John Yang, Yash Bhalgat, Simyung Chang, Fatih Porikli, Nojun Kwak; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022, pp. 1869-1879

Abstract


While hand pose estimation is a critical component of most interactive extended reality and gesture recognition systems, contemporary approaches are not optimized for computational and memory efficiency. In this paper, we propose a tiny deep neural network whose partial layers are recursively exploited to refine its previous estimates. During these iterative refinements, we employ learned gating criteria to decide whether to exit from the weight-sharing loop, allowing per-sample adaptation in our model. Our network is trained to be aware of the uncertainty in its current predictions so that it can gate efficiently at each iteration, estimating variances for its keypoint estimates after each loop. Additionally, we investigate the effectiveness of end-to-end and progressive training protocols for our recursive structure in maximizing model capacity. With the proposed setting, our method consistently outperforms state-of-the-art 2D/3D hand pose estimation approaches in terms of both accuracy and efficiency on two widely used benchmarks (e.g., up to a 4.9x reduction in GFLOPs and 12.5x fewer parameters than the current SOTA, ACE-Net, while achieving a 5.1% AUC improvement on the FPHA dataset).
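The core control flow described above (a shared refinement step applied repeatedly, with a per-sample exit gate driven by predicted uncertainty) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the refinement step, the variance predictor, and the threshold gate are all stand-in placeholders for the learned components.

```python
def dynamic_refinement(estimate, refine, predict_var,
                       max_iters=5, var_threshold=0.01):
    """Iteratively apply a weight-shared refinement step to an initial
    estimate, exiting early once the predicted variance (uncertainty)
    of the current estimate drops below a threshold.

    `refine` and `predict_var` stand in for the paper's learned shared
    layers and variance head; the fixed threshold approximates the
    learned gating criterion (hypothetical simplification).
    """
    iters_used = 0
    for _ in range(max_iters):
        estimate = refine(estimate)        # shared refinement layers
        variance = predict_var(estimate)   # per-sample uncertainty
        iters_used += 1
        if variance < var_threshold:       # gate: confident enough, exit loop
            break
    return estimate, iters_used


# Toy usage: each step halves the gap to a target value of 1.0,
# and "variance" is the squared remaining error.
est, n = dynamic_refinement(
    0.0,
    refine=lambda e: e + 0.5 * (1.0 - e),
    predict_var=lambda e: (1.0 - e) ** 2,
)
```

Because the gate is evaluated after every pass through the shared layers, easy samples exit after few iterations while hard samples use the full budget, which is where the per-sample efficiency gains come from.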

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Yang_2022_WACV,
    author    = {Yang, John and Bhalgat, Yash and Chang, Simyung and Porikli, Fatih and Kwak, Nojun},
    title     = {Dynamic Iterative Refinement for Efficient 3D Hand Pose Estimation},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2022},
    pages     = {1869-1879}
}