Aggregating Deep Pyramidal Representations for Person Re-Identification

Niki Martinel, Gian Luca Foresti, Christian Micheloni; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019


Learning discriminative, view-invariant, and multi-scale representations of person appearance at different semantic levels is of paramount importance for person Re-Identification (Re-ID). The community has spent considerable effort learning deep Re-ID models that capture a holistic, single-semantic-level feature representation. To improve results, additional visual attributes and body-part-driven models have been considered; however, these require extensive human annotation labor or demand additional computational effort. We argue that a pyramid-inspired method capturing multi-scale information can overcome such requirements. Specifically, multi-scale stripes representing the visual information of a person are fed to a novel architecture that factorizes them into latent discriminative factors at multiple semantic levels. A multi-task loss is combined with a curriculum learning strategy to learn a discriminative and invariant person representation, which is exploited for triplet-similarity learning. Results on three benchmark Re-ID datasets demonstrate that better performance than existing methods is achieved (e.g., more than 90% accuracy on the DukeMTMC dataset).
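The multi-scale stripe idea can be illustrated with a minimal sketch: a convolutional feature map is split into horizontal stripes at several pyramid levels, and each stripe is average-pooled into one descriptor. This is only an illustration of the general pyramidal-stripe pooling concept, not the authors' implementation; the function name, the pooling choice, and the `levels` parameter are assumptions.

```python
import numpy as np

def pyramidal_stripe_features(feat, levels=3):
    """Pool horizontal stripes of a (C, H, W) feature map at multiple
    pyramid levels. Level p splits the height into p stripes, so the
    result holds 1 + 2 + ... + levels stripe descriptors, each C-dim.

    Illustrative sketch only -- the paper's architecture further
    factorizes such stripes into latent discriminative factors.
    """
    C, H, W = feat.shape
    vectors = []
    for p in range(1, levels + 1):
        # Split the height into p roughly equal horizontal stripes.
        bounds = np.linspace(0, H, p + 1).astype(int)
        for i in range(p):
            stripe = feat[:, bounds[i]:bounds[i + 1], :]
            # Global average pooling over the stripe's spatial extent.
            vectors.append(stripe.mean(axis=(1, 2)))
    return vectors

# Example: a 256-channel 24x8 feature map with 3 pyramid levels
# yields 1 + 2 + 3 = 6 stripe descriptors of 256 dimensions each.
fmap = np.random.rand(256, 24, 8)
feats = pyramidal_stripe_features(fmap, levels=3)
```

The coarsest level (one stripe) reduces to a holistic global descriptor, while finer levels capture localized, part-like information without any body-part annotation.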

Related Material

@InProceedings{Martinel_2019_CVPR_Workshops,
    author = {Martinel, Niki and Foresti, Gian Luca and Micheloni, Christian},
    title = {Aggregating Deep Pyramidal Representations for Person Re-Identification},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month = {June},
    year = {2019}
}