Learning Visual Prompt for Gait Recognition
Abstract
Gait, a prevalent and complex form of human motion, plays a significant role in long-range pedestrian retrieval due to the unique characteristics inherent in individual motion patterns. However, gait recognition in real-world scenarios is challenging because of the limitations of capturing comprehensive cross-view and cross-clothing data. Additionally, distractors such as occlusions, directional changes, and lingering movements further complicate the problem. The widespread application of deep learning techniques has led to the development of various potential gait recognition methods. However, these methods utilize convolutional networks to extract information shared across different views and attire conditions. Once trained, their parameters and non-linear functions become constrained to fixed patterns, limiting their adaptability to the various distractors found in real-world scenarios. In this paper, we present a unified gait recognition framework to extract global motion patterns and develop a novel dynamic transformer to generate representative gait features. Specifically, we develop a trainable part-based prompt pool with numerous key-value pairs that can dynamically select prompt templates to incorporate into the gait sequence, thereby providing task-relevant shared knowledge. Furthermore, we specifically design dynamic attention to extract robust motion patterns and address the length generalization issue. Extensive experiments on four widely recognized gait datasets, i.e., Gait3D, GREW, OUMVLP, and CASIA-B, reveal that the proposed method yields substantial improvements over current state-of-the-art approaches.
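To make the prompt-pool idea concrete, the sketch below shows one way a part-based pool of key-value pairs could dynamically select prompt tokens and prepend them to a gait token sequence. It is a minimal PyTorch illustration under assumed choices (the module name PartPromptPool, pool size, prompt length, top-k cosine-similarity matching, and mean-pooled queries are our assumptions for illustration, not the paper's exact design).

# Minimal sketch of a key-value prompt pool for part-level gait tokens.
# Hyperparameters and the matching rule are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PartPromptPool(nn.Module):
    def __init__(self, pool_size=16, prompt_len=4, dim=256, top_k=4):
        super().__init__()
        # Learnable keys used to match an input query against the pool.
        self.keys = nn.Parameter(torch.randn(pool_size, dim))
        # Learnable prompt templates (values); each key owns prompt_len tokens.
        self.prompts = nn.Parameter(torch.randn(pool_size, prompt_len, dim))
        self.top_k = top_k

    def forward(self, tokens):
        # tokens: (B, N, dim) token sequence for one body part of a gait clip.
        query = tokens.mean(dim=1)                       # (B, dim) sequence-level query
        sim = F.cosine_similarity(                       # (B, pool_size) key matching
            query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1
        )
        topk = sim.topk(self.top_k, dim=-1).indices      # (B, top_k) selected entries
        selected = self.prompts[topk]                    # (B, top_k, prompt_len, dim)
        selected = selected.flatten(1, 2)                # (B, top_k * prompt_len, dim)
        # Prepend the selected prompts so downstream attention can use the
        # shared, task-relevant knowledge they carry.
        return torch.cat([selected, tokens], dim=1)


# Usage: prepend shared prompt tokens to a batch of part-level gait tokens.
pool = PartPromptPool()
gait_tokens = torch.randn(2, 30, 256)   # 2 sequences, 30 frame tokens, 256-dim
out = pool(gait_tokens)
print(out.shape)                        # (2, 46, 256): 16 prompt tokens + 30 frame tokens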
Related Material
[pdf]
[bibtex]
@InProceedings{Ma_2024_CVPR,
    author    = {Ma, Kang and Fu, Ying and Cao, Chunshui and Hou, Saihui and Huang, Yongzhen and Zheng, Dezhi},
    title     = {Learning Visual Prompt for Gait Recognition},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {593-603}
}