Unlabeled Samples Generated by GAN Improve the Person Re-Identification Baseline in Vitro
Zhedong Zheng, Liang Zheng, Yi Yang; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 3754-3762
Abstract
The main contribution of this paper is a simple semi-supervised pipeline that only uses the original training set without collecting extra data. The challenges lie in 1) how to obtain more training data from only the original training set and 2) how to use the newly generated data. In this work, the generative adversarial network (GAN) is used to generate unlabeled samples. We propose the label smoothing regularization for outliers (LSRO). This method assigns a uniform label distribution to the unlabeled images, which regularizes the supervised model and improves the baseline. We verify the proposed method on a practical problem: person re-identification (re-ID). This task aims to retrieve a query person from other cameras. We adopt the deep convolutional generative adversarial network (DCGAN) for sample generation, and a baseline convolutional neural network (CNN) for representation learning. Experiments show that adding the GAN-generated data effectively improves the discriminative ability of learned CNN embeddings. On three large-scale datasets, Market-1501, CUHK03 and DukeMTMC-reID, we obtain +4.37%, +1.6% and +2.46% improvement in rank-1 precision over the baseline CNN, respectively. We additionally apply the proposed method to fine-grained bird recognition and achieve a +0.6% improvement over a strong baseline. The code is available at https://github.com/layumi/Person-reID_GAN.
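To make the LSRO idea concrete, the following is a minimal PyTorch-style sketch (not the authors' released code, which lives at the repository above) of a combined loss: real images receive standard one-hot cross-entropy over the K labeled identities, while GAN-generated images receive a uniform target of 1/K over all classes. The function name `lsro_loss` and the batch layout are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def lsro_loss(logits, labels, is_generated, num_classes):
    """Illustrative LSRO objective (sketch, not the official implementation).

    logits:       (N, K) raw class scores from the CNN
    labels:       (N,) ground-truth identity indices; entries for generated images are ignored
    is_generated: (N,) boolean mask, True for GAN-generated (unlabeled) samples
    num_classes:  K, the number of labeled identities
    """
    log_probs = F.log_softmax(logits, dim=1)

    # Real images: standard cross-entropy against the one-hot identity label.
    real_loss = F.nll_loss(log_probs[~is_generated], labels[~is_generated],
                           reduction='sum')

    # Generated images: cross-entropy against a uniform distribution,
    # i.e. -(1/K) * sum_k log p_k per sample, which is -mean_k log p_k.
    gen_loss = -(log_probs[is_generated].mean(dim=1)).sum()

    # Average over the whole batch of real + generated samples.
    return (real_loss + gen_loss) / logits.size(0)
```

In training, a batch would mix real labeled images with DCGAN-generated images flagged by `is_generated`; the uniform targets push the network to stay uncertain on the generated outliers, acting as a regularizer on the supervised embedding.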
Related Material
[pdf]
[arXiv]
[video]
[bibtex]
@InProceedings{Zheng_2017_ICCV,
author = {Zheng, Zhedong and Zheng, Liang and Yang, Yi},
title = {Unlabeled Samples Generated by GAN Improve the Person Re-Identification Baseline in Vitro},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}