Gait Recognition via Disentangled Representation Learning

Ziyuan Zhang, Luan Tran, Xi Yin, Yousef Atoum, Xiaoming Liu, Jian Wan, Nanxin Wang; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 4710-4719

Abstract


Gait, the walking pattern of individuals, is one of the most important biometric modalities. Most existing gait recognition methods take silhouettes or articulated body models as the gait features. These methods suffer from degraded recognition performance when handling confounding variables such as clothing, carrying condition, and view angle. To remedy this issue, we propose a novel autoencoder framework that explicitly disentangles pose and appearance features from RGB imagery; an LSTM-based integration of the pose features over time then produces the gait feature. In addition, we collect a Frontal-View Gait (FVG) dataset to focus on gait recognition from frontal-view walking, a challenging problem since this view contains minimal gait cues compared to other views. FVG also includes other important variations, e.g., walking speed, carrying, and clothing. With extensive experiments on the CASIA-B, USF, and FVG datasets, our method demonstrates superior performance to the state of the art quantitatively, the ability to disentangle features qualitatively, and promising computational efficiency.
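The pipeline the abstract describes — a per-frame encoder that disentangles pose from appearance, followed by temporal aggregation of the pose features into a single gait feature — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: all names and feature dimensions are hypothetical, and a simple temporal average stands in for the paper's LSTM aggregator.

```python
import random

random.seed(0)

POSE_DIM, APP_DIM = 4, 3  # hypothetical feature sizes, not from the paper


def encode(frame_feature):
    """Hypothetical encoder stand-in: split a frame's feature vector into
    a pose (dynamic) part and an appearance (static) part."""
    return frame_feature[:POSE_DIM], frame_feature[POSE_DIM:]


def aggregate(pose_sequence):
    """Stand-in for the paper's LSTM: average pose features over time
    to produce one fixed-length gait feature for the whole video."""
    num_frames = len(pose_sequence)
    return [sum(p[i] for p in pose_sequence) / num_frames
            for i in range(POSE_DIM)]


# Toy "video": 5 frames, each represented by a 7-dim feature vector.
video = [[random.random() for _ in range(POSE_DIM + APP_DIM)]
         for _ in range(5)]

poses = [encode(frame)[0] for frame in video]  # appearance part is discarded
gait_feature = aggregate(poses)
print(len(gait_feature))  # → 4 (one POSE_DIM-length vector per video)
```

At recognition time, such a per-video gait feature would be compared across videos (e.g., by a distance metric), while the discarded appearance part absorbs clothing and carrying variation — which is the motivation for the disentanglement.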

Related Material


[bibtex]
@InProceedings{Zhang_2019_CVPR,
author = {Zhang, Ziyuan and Tran, Luan and Yin, Xi and Atoum, Yousef and Liu, Xiaoming and Wan, Jian and Wang, Nanxin},
title = {Gait Recognition via Disentangled Representation Learning},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}