GPGait: Generalized Pose-based Gait Recognition

Yang Fu, Shibei Meng, Saihui Hou, Xuecai Hu, Yongzhen Huang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 19595-19604


Recent works on pose-based gait recognition have demonstrated the potential of using such simple information to achieve results comparable to silhouette-based methods. However, the generalization ability of pose-based methods on different datasets is undesirably inferior to that of silhouette-based ones, which has received little attention but hinders the application of these methods in real-world scenarios. To improve the generalization ability of pose-based methods across datasets, we propose a Generalized Pose-based Gait recognition (GPGait) framework. First, a Human-Oriented Transformation (HOT) and a series of Human-Oriented Descriptors (HOD) are proposed to obtain a unified pose representation with discriminative multi-features. Then, given the slight variations in the unified representation after HOT and HOD, it becomes crucial for the network to extract local-global relationships between the keypoints. To this end, a Part-Aware Graph Convolutional Network (PAGCN) is proposed to enable efficient graph partition and local-global spatial feature extraction. Experiments on four public gait recognition datasets, CASIA-B, OUMVLP-Pose, Gait3D and GREW, show that our model demonstrates better and more stable cross-domain capabilities compared to existing skeleton-based methods, achieving comparable recognition results to silhouette-based ones. Code is available at
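The Human-Oriented Transformation described above maps raw keypoints into a unified, dataset-independent representation. The paper does not give code here, but the general idea of pose normalization can be sketched as below; the choice of reference joint and anchor-joint pair (`center_idx`, `scale_pair`) and the function name are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def human_oriented_transform(keypoints, center_idx=0, scale_pair=(5, 11)):
    """Hypothetical HOT-style pose normalization sketch.

    keypoints: (T, K, 2) array of 2D joint coordinates over T frames.
    Each frame is centered at a reference joint and rescaled by the
    distance between two anchor joints, so that poses estimated on
    different datasets land in a comparable coordinate frame.
    """
    kp = np.asarray(keypoints, dtype=float)
    center = kp[:, center_idx:center_idx + 1, :]      # (T, 1, 2) reference joint
    centered = kp - center                            # translate to the origin
    # Per-frame scale: distance between the two anchor joints (assumed indices)
    scale = np.linalg.norm(
        centered[:, scale_pair[0]] - centered[:, scale_pair[1]], axis=-1
    )
    scale = np.maximum(scale, 1e-6)[:, None, None]    # guard against degenerate poses
    return centered / scale
```

After this step the reference joint sits at the origin in every frame and the anchor-joint distance is exactly 1, which is why, as the abstract notes, only slight variations remain and the network must focus on local-global keypoint relationships.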

Related Material

[pdf] [supp] [arXiv]
@InProceedings{Fu_2023_ICCV,
    author    = {Fu, Yang and Meng, Shibei and Hou, Saihui and Hu, Xuecai and Huang, Yongzhen},
    title     = {GPGait: Generalized Pose-based Gait Recognition},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {19595-19604}
}