@InProceedings{Guo_2025_WACV,
  author    = {Guo, Yuxiang and Shah, Anshul and Liu, Jiang and Gupta, Ayush and Chellappa, Rama and Peng, Cheng},
  title     = {GaitContour: Efficient Gait Recognition Based on a Contour-Pose Representation},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {1051-1061}
}
GaitContour: Efficient Gait Recognition Based on a Contour-Pose Representation
Abstract
Gait recognition holds the promise of robustly identifying subjects based on walking patterns rather than appearance information. In recent years, this field has been dominated by learning methods based on two input formats: silhouette images and sparse keypoints. Compared to image-based approaches, keypoint-based methods can achieve significantly higher efficiency due to their sparsity. However, sparsity also results in information loss, thereby reducing performance. In this work, we propose a novel keypoint-based Contour-Pose representation, which compactly encodes both body shape and body part information. We further propose a local-to-global architecture, called GaitContour, to leverage this novel representation and efficiently compute subject embeddings in two stages. The first stage consists of a local transformer that extracts features from five different body regions. The second stage then aggregates the regional features to estimate a global human gait representation. Such a design significantly reduces the complexity of the attention operation and improves both efficiency and performance. Through large-scale experiments, GaitContour is shown to perform significantly better than previous keypoint-based methods. Furthermore, the Contour-Pose representation also achieves new state-of-the-art performance with fusion-based gait recognition methods.
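The two-stage local-to-global design described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not the authors' implementation): it uses randomly initialized single-head self-attention, an arbitrary even split of tokens into five regions, and made-up dimensions, purely to show why restricting attention to regions first and then attending over five region summaries is cheaper than full attention over all tokens.

```python
import numpy as np

def self_attention(x, rng):
    """Single-head self-attention with random projections (illustrative only)."""
    d = x.shape[-1]
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)                     # (n, n) attention logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ v

rng = np.random.default_rng(0)
num_points, d = 40, 16                                # hypothetical token count / feature dim
points = rng.standard_normal((num_points, d))         # stand-in for Contour-Pose tokens

# Stage 1: local attention within each of five body regions (arbitrary split here),
# pooled to one feature vector per region.
regions = np.array_split(points, 5)
region_feats = np.stack([self_attention(r, rng).mean(axis=0) for r in regions])

# Stage 2: global attention over only the five region features, pooled to
# a single subject embedding.
embedding = self_attention(region_feats, rng).mean(axis=0)
print(embedding.shape)  # (16,)
```

With N tokens split into R regions of size roughly N/R, the pairwise attention cost drops from O(N^2) to O(N^2/R) in stage one plus O(R^2) in stage two, which is the efficiency argument the abstract makes.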