Learning Shape Representations for Person Re-Identification Under Clothing Change

Yu-Jhe Li, Xinshuo Weng, Kris M. Kitani; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 2432-2441

Abstract


Person re-identification (re-ID) aims to recognize instances of the same person contained in multiple images taken across different cameras. Existing methods for re-ID tend to rely heavily on the assumption that both query and gallery images of the same person have the same clothing. Unfortunately, this assumption may not hold for datasets captured over long periods of time. To tackle the re-ID problem in the context of clothing changes, we propose a novel representation learning method which is able to generate a shape-based feature representation that is invariant to clothing. We call our model the Clothing Agnostic Shape Extraction Network (CASE-Net). CASE-Net learns a representation of a person that depends primarily on shape via adversarial learning and feature disentanglement. Quantitative and qualitative results across 5 datasets (Div-Market, Market1501, and three large-scale datasets with clothing changes) show our approach makes significant improvements over prior state-of-the-art approaches.
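The abstract mentions adversarial learning for clothing-invariant features but does not detail the mechanism. One common way such adversarial disentanglement is implemented (this is a generic illustration, not the paper's actual CASE-Net architecture) is a gradient reversal layer: an auxiliary classifier tries to predict the nuisance attribute (here, clothing) from the features, and the encoder receives the negated gradient so that it learns features from which clothing cannot be predicted. A toy numpy sketch of the sign-flip mechanics, with all names and values hypothetical:

```python
import numpy as np

def grl_forward(x):
    # Gradient reversal layer: identity in the forward pass.
    return x

def grl_backward(grad, lam=1.0):
    # In the backward pass, the gradient flowing to the encoder is negated,
    # pushing the encoder to *remove* clothing information.
    return -lam * grad

# Toy setup (hypothetical values): scalar feature f, a linear clothing
# classifier with weight w, squared-error loss against clothing label y.
f = np.array([2.0])   # encoder output (feature)
w = np.array([0.5])   # clothing-classifier weight
y = np.array([2.0])   # clothing label

pred = w * grl_forward(f)            # classifier prediction: 1.0
loss = 0.5 * (pred - y) ** 2         # adversarial (clothing) loss

# Gradient of the loss w.r.t. the feature, as seen by the classifier:
grad_f = (pred - y) * w              # -0.5

# After gradient reversal, the encoder receives the opposite direction,
# so minimizing the encoder's objective *increases* the classifier's loss:
grad_encoder = grl_backward(grad_f)  # +0.5
```

The classifier itself is trained normally on its own parameters; only the gradient passed back into the feature extractor is reversed, which is what makes the two objectives adversarial.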

Related Material


[bibtex]
@InProceedings{Li_2021_WACV,
    author    = {Li, Yu-Jhe and Weng, Xinshuo and Kitani, Kris M.},
    title     = {Learning Shape Representations for Person Re-Identification Under Clothing Change},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2021},
    pages     = {2432-2441}
}