Robust Person Re-Identification by Modelling Feature Uncertainty

Tianyuan Yu, Da Li, Yongxin Yang, Timothy M. Hospedales, Tao Xiang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 552-561

Abstract

We aim to learn deep person re-identification (ReID) models that are robust against noisy training data. Two types of noise are prevalent in practice: (1) label noise caused by human annotator errors and (2) data outliers caused by person detector errors or occlusion. Both types of noise pose serious problems for training ReID models, yet they have so far been largely ignored. In this paper, we propose a novel deep network, termed DistributionNet, for robust ReID. Instead of representing each person image as a feature vector, DistributionNet models it as a Gaussian distribution whose variance represents the uncertainty of the extracted features. A carefully designed loss is formulated in DistributionNet to allocate uncertainty unevenly across training samples. Consequently, noisy samples are assigned large variance/uncertainty, which effectively alleviates their negative impact on model fitting. Extensive experiments demonstrate that our model is more effective than alternative noise-robust deep models. The source code is available at: https://github.com/TianyuanYu/DistributionNet
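The abstract does not spell out the loss, so the sketch below is only a hypothetical PyTorch illustration of the general idea: a head that predicts a per-image mean and log-variance, draws reparameterized samples from the resulting Gaussian, and combines a classification loss over the samples with an assumed regularizer that rewards overall uncertainty. All names (GaussianEmbeddingHead, distribution_loss, lam) are illustrative, not the authors' API; the released code at the URL above is the authoritative reference.

# Hypothetical sketch of a Gaussian-embedding head in the spirit of
# DistributionNet; the exact loss used in the paper may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianEmbeddingHead(nn.Module):
    """Maps backbone features to a Gaussian N(mu, sigma^2) per image."""
    def __init__(self, in_dim, emb_dim, num_ids):
        super().__init__()
        self.mu = nn.Linear(in_dim, emb_dim)       # mean embedding
        self.log_var = nn.Linear(in_dim, emb_dim)  # log-variance = uncertainty
        self.classifier = nn.Linear(emb_dim, num_ids)

    def forward(self, feats, num_samples=2):
        mu = self.mu(feats)
        log_var = self.log_var(feats)
        std = torch.exp(0.5 * log_var)
        # Reparameterization trick: z = mu + eps * std keeps gradients
        # flowing into both the mean and the variance branches.
        eps = torch.randn(num_samples, *mu.shape, device=mu.device)
        z = mu.unsqueeze(0) + eps * std.unsqueeze(0)
        return mu, log_var, z

def distribution_loss(head, mu, log_var, z, labels, lam=0.1):
    # Classify the mean and every sampled embedding; images with large
    # variance produce noisier logits and hence a larger expected loss.
    loss = F.cross_entropy(head.classifier(mu), labels)
    for zi in z:
        loss = loss + F.cross_entropy(head.classifier(zi), labels)
    loss = loss / (1 + z.shape[0])
    # Assumed regularizer: reward average uncertainty so that, to keep the
    # classification terms low, the net concentrates variance on samples
    # it cannot fit, i.e. the noisy ones.
    loss = loss - lam * log_var.mean()
    return loss

Because raising a clean sample's variance hurts its classification terms while raising a noisy sample's variance costs little, minimizing this kind of objective allocates uncertainty unevenly, downweighting the influence of mislabeled images and outliers on the learned features.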

Related Material

[pdf]
[bibtex]
@InProceedings{Yu_2019_ICCV,
author = {Yu, Tianyuan and Li, Da and Yang, Yongxin and Hospedales, Timothy M. and Xiang, Tao},
title = {Robust Person Re-Identification by Modelling Feature Uncertainty},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}