Similarity Learning With Spatial Constraints for Person Re-Identification

Dapeng Chen, Zejian Yuan, Badong Chen, Nanning Zheng; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 1268-1277


Pose variation remains one of the major factors that adversely affect the accuracy of person re-identification. Such variation is not arbitrary, as body parts (e.g. head, torso, legs) have relatively stable spatial distributions. Decomposing the variability of global appearance according to this spatial distribution can benefit person matching. We therefore learn a novel similarity function that consists of multiple sub-similarity measurements, each responsible for one subregion. In particular, we take advantage of the recently proposed polynomial feature map to describe the matching within each subregion, and integrate all the feature maps into a unified framework. The framework not only outputs similarity measurements for different regions, but also enforces better consistency among them. Our framework combines local similarities with global similarity to exploit their complementary strengths, and is flexible enough to incorporate multiple visual cues to further improve performance. In experiments, we analyze the effectiveness of the major components. The results on four datasets show significant and consistent improvements over state-of-the-art methods.
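The combination of region-wise sub-similarities with a global term described above can be sketched as follows. This is a hedged illustration, not the paper's actual formulation: the function names, the simplified second-order polynomial feature map (an outer product of the two descriptors), and the plain sum over regions are all assumptions for clarity; the paper learns the weights jointly under spatial-consistency constraints, which is omitted here.

```python
import numpy as np

def poly_feature_map(x, y):
    # Simplified second-order polynomial feature map for a descriptor
    # pair: all multiplicative interactions between entries of x and y.
    # (Hypothetical stand-in for the paper's polynomial feature map.)
    return np.outer(x, y).ravel()

def region_similarity(x, y, w):
    # Linear sub-similarity on the feature map: s_r(x, y) = <w_r, phi(x, y)>.
    return float(w @ poly_feature_map(x, y))

def overall_similarity(x_regions, y_regions, region_weights,
                       x_global, y_global, global_weight):
    # Sum local sub-similarities over subregions, then add a global
    # term, mirroring the idea of exploiting their complementary
    # strengths. Weights here are given, not learned.
    s = sum(region_similarity(xr, yr, wr)
            for xr, yr, wr in zip(x_regions, y_regions, region_weights))
    s += region_similarity(x_global, y_global, global_weight)
    return s
```

With learned weight vectors, each sub-similarity scores the match within its body-part subregion, and the global term scores the whole image; the final score is their sum.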

Related Material

@InProceedings{Chen_2016_CVPR,
author = {Chen, Dapeng and Yuan, Zejian and Chen, Badong and Zheng, Nanning},
title = {Similarity Learning With Spatial Constraints for Person Re-Identification},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2016}
}