Two-Point Gait: Decoupling Gait from Body Shape

Stephen Lombardi, Ko Nishino, Yasushi Makihara, Yasushi Yagi; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 1041-1048

Human gait modeling (e.g., for person identification) largely relies on image-based representations that conflate gait with body shape. Silhouettes, for instance, inherently entangle the two. For gait analysis and recognition, decoupling these factors is desirable. Most importantly, once decoupled, they can be recombined as the task demands, which is impossible if they are left entangled in the first place. In this paper, we introduce Two-Point Gait, a gait representation that encodes limb motion regardless of body shape. Two-Point Gait is computed directly on the image sequence from the two-point statistics of optical flow fields. We demonstrate its use for exploring the space of human gaits and for gait recognition under large clothing variation. The results show that we achieve state-of-the-art person recognition accuracy on a challenging dataset.
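To make the core idea concrete, here is a minimal NumPy sketch of a two-point statistic over an optical flow field: a joint histogram of quantized flow directions at randomly sampled pixel pairs. The function name, the direction-binning scheme, and the pair-sampling strategy are illustrative assumptions, not the paper's exact formulation (the flow field itself would come from a dense optical flow estimator in practice).

```python
import numpy as np

def two_point_flow_histogram(flow, n_bins=8, n_pairs=5000, seed=0):
    """Illustrative two-point statistic (hypothetical sketch, not the
    paper's exact method): joint histogram of quantized flow directions
    sampled at random pairs of moving pixels.

    flow: (H, W, 2) array of per-pixel (dx, dy) optical flow vectors.
    Returns an (n_bins, n_bins) histogram normalized to sum to 1.
    """
    rng = np.random.default_rng(seed)
    mag = np.linalg.norm(flow, axis=2)
    ys, xs = np.nonzero(mag > 1e-6)          # consider only moving pixels
    hist = np.zeros((n_bins, n_bins))
    if len(ys) < 2:
        return hist
    # Quantize each pixel's flow direction into n_bins angular bins.
    ang = np.arctan2(flow[..., 1], flow[..., 0])        # in [-pi, pi]
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    # Sample random pixel pairs and accumulate their joint direction bins.
    i = rng.integers(0, len(ys), size=n_pairs)
    j = rng.integers(0, len(ys), size=n_pairs)
    np.add.at(hist, (bins[ys[i], xs[i]], bins[ys[j], xs[j]]), 1.0)
    return hist / hist.sum()

# Toy usage: left half of the frame moves right, right half moves left.
flow = np.zeros((20, 20, 2))
flow[:, :10, 0] = 1.0
flow[:, 10:, 0] = -1.0
hist = two_point_flow_histogram(flow)
```

Because the statistic is built from pairwise relations between flow vectors rather than from the silhouette itself, it captures relative limb motion while discarding the static outline of the body, which is the decoupling intuition behind the representation.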

Related Material

@InProceedings{Lombardi_2013_ICCV,
author = {Lombardi, Stephen and Nishino, Ko and Makihara, Yasushi and Yagi, Yasushi},
title = {Two-Point Gait: Decoupling Gait from Body Shape},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013}
}