Dynamic Support Information Mining for Category-Agnostic Pose Estimation

Pengfei Ren, Yuanyuan Gao, Haifeng Sun, Qi Qi, Jingyu Wang, Jianxin Liao; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 1921-1930

Abstract


Category-agnostic pose estimation (CAPE) aims to predict the pose of a query image based on a few support images with pose annotations. Existing methods achieve the localization of arbitrary keypoints through similarity matching between support keypoint features and query image features. However, these methods primarily focus on mining information from the query images, neglecting the fact that support samples with keypoint annotations contain rich category-specific fine-grained semantic information and prior structural information. In this paper, we propose a Support-based Dynamic Perception Network (SDPNet) for robust and accurate CAPE. On the one hand, SDPNet models complex dependencies between support keypoints, constructing a category-specific prior structure to guide the interaction of query keypoints. On the other hand, SDPNet extracts fine-grained semantic information from support samples, dynamically modulating the refinement process of the query. Our method outperforms existing methods on the MP-100 dataset by a large margin.
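To make the similarity-matching baseline the abstract critiques concrete, here is a minimal illustrative sketch (not SDPNet itself, and not code from the paper): a single support keypoint feature is matched against a dense query feature map by cosine similarity, and the keypoint is localized at the best-matching position. All names and shapes are hypothetical.

```python
import numpy as np

def locate_keypoint(support_feat, query_feats):
    """Illustrative similarity-matching localization (hypothetical helper).

    support_feat: (C,) feature vector of one annotated support keypoint.
    query_feats:  (C, H, W) dense query image features.
    Returns (row, col) of the most similar query location.
    """
    C, H, W = query_feats.shape
    q = query_feats.reshape(C, -1)                              # (C, H*W)
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)   # normalize columns
    s = support_feat / (np.linalg.norm(support_feat) + 1e-8)    # normalize support
    sim = s @ q                                                 # cosine similarity map
    idx = int(np.argmax(sim))
    return divmod(idx, W)                                       # flat index -> (row, col)

# Toy example: plant the support feature at position (2, 3) of a 4x5 map.
rng = np.random.default_rng(0)
feat = rng.normal(size=8)
qmap = rng.normal(size=(8, 4, 5)) * 0.1
qmap[:, 2, 3] = feat
print(locate_keypoint(feat, qmap))  # -> (2, 3)
```

This per-keypoint matching is exactly what ignores the structural dependencies between support keypoints; SDPNet's contribution is to additionally exploit that support-side structure and semantics.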

Related Material


@InProceedings{Ren_2024_CVPR,
    author    = {Ren, Pengfei and Gao, Yuanyuan and Sun, Haifeng and Qi, Qi and Wang, Jingyu and Liao, Jianxin},
    title     = {Dynamic Support Information Mining for Category-Agnostic Pose Estimation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {1921-1930}
}