Horizontal Flipping Assisted Disentangled Feature Learning for Semi-Supervised Person Re-Identification

Gehan Hao, Yang Yang, Xue Zhou, Guanan Wang, Zhen Lei; Proceedings of the Asian Conference on Computer Vision (ACCV), 2020

Abstract


In this paper, we propose to learn a powerful Re-ID model by using a small amount of labeled data together with a large amount of unlabeled data, i.e., semi-supervised Re-ID. Such learning enables the Re-ID model to be more generalizable and scalable to real-world scenes. Specifically, we design a two-stream encoder-decoder-based structure with shared modules and parameters. For the encoder module, we take the original person image and its horizontal mirror image as a pair of inputs and encode deep features with identity and structural information properly disentangled. Different combinations of the disentangled features are then used to reconstruct images in the decoder module. In addition to the commonly used constraints of identity consistency and image reconstruction consistency in the loss function, we design a novel loss that enforces consistent transformation constraints on the disentangled features. It is free of labels, and can therefore be applied to both the supervised and unsupervised learning branches of our model. Extensive results on four Re-ID datasets demonstrate that, even with 5/6 of the labeled data removed, our method achieves the best performance on Market-1501 and CUHK03, and comparable accuracy on DukeMTMC-reID and MSMT17.
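To make the flip-based consistency idea concrete, below is a minimal, hypothetical sketch (in NumPy, not the authors' actual implementation) of a transformation-consistency loss on disentangled features: identity features are assumed to be invariant under a horizontal flip of the input, while structural feature maps are assumed to mirror each other along the width axis. All function names and the exact loss form are illustrative assumptions.

```python
import numpy as np

def horizontal_flip(img):
    # Flip an (H, W, C) image along the width axis.
    return img[:, ::-1, :]

def flip_consistency_loss(f_id, f_id_flip, f_struct, f_struct_flip):
    """Hypothetical label-free consistency loss.

    f_id, f_id_flip:         identity features of the original / flipped image
                             (assumed flip-invariant), shape (D,) or (N, D).
    f_struct, f_struct_flip: structural feature maps of the original / flipped
                             image (assumed to mirror each other), shape (H, W).
    """
    # Identity term: identity features should not change under flipping.
    id_term = np.mean((f_id - f_id_flip) ** 2)
    # Structure term: flipping the structural map of the flipped image
    # back along the width axis should recover the original map.
    struct_term = np.mean((f_struct - f_struct_flip[:, ::-1]) ** 2)
    return id_term + struct_term
```

Because no identity labels appear anywhere in this loss, it can be computed on both labeled and unlabeled images, which is what lets the constraint serve both branches of a semi-supervised model.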

Related Material


[bibtex]
@InProceedings{Hao_2020_ACCV,
  author    = {Hao, Gehan and Yang, Yang and Zhou, Xue and Wang, Guanan and Lei, Zhen},
  title     = {Horizontal Flipping Assisted Disentangled Feature Learning for Semi-Supervised Person Re-Identification},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  month     = {November},
  year      = {2020}
}