Patch-Based Discriminative Feature Learning for Unsupervised Person Re-Identification

Qize Yang, Hong-Xing Yu, Ancong Wu, Wei-Shi Zheng; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 3633-3642

Abstract


While discriminative local features have been shown to be effective for person re-identification, their training typically requires fully pairwise-labelled data, which is expensive to obtain. In this work, we overcome this problem by proposing a patch-based unsupervised learning framework that learns discriminative features from patches instead of whole images. The patch-based learning leverages similarities between patches to learn a discriminative model. Specifically, we develop a PatchNet to select patches from the feature map and learn discriminative features for these patches. To provide effective guidance for the PatchNet to learn discriminative patch features on unlabelled datasets, we propose an unsupervised patch-based discriminative feature learning loss. In addition, we design an image-level feature learning loss that leverages all the patch features of the same image to serve as image-level guidance for the PatchNet. Extensive experiments validate the superiority of our method for unsupervised person re-id. Our code is available at https://github.com/QizeYang/PAUL.

Related Material


[pdf]
[bibtex]
@InProceedings{Yang_2019_CVPR,
author = {Yang, Qize and Yu, Hong-Xing and Wu, Ancong and Zheng, Wei-Shi},
title = {Patch-Based Discriminative Feature Learning for Unsupervised Person Re-Identification},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}