Learning From Noisy Labels With Distillation
Yuncheng Li, Jianchao Yang, Yale Song, Liangliang Cao, Jiebo Luo, Li-Jia Li; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 1910-1918
Abstract
The ability to learn from noisy labels is very useful in many visual recognition tasks, as a vast amount of data with noisy labels is relatively easy to obtain. Traditionally, noisy labels have been treated as statistical outliers, and techniques such as importance re-weighting and bootstrapping have been proposed to alleviate the problem. According to our observation, real-world noisy labels exhibit multi-mode characteristics like the true labels, rather than behaving like independent random outliers. In this work, we propose a unified distillation framework that uses "side" information, including a small clean dataset and label relations in a knowledge graph, to "hedge the risk" of learning from noisy labels. Unlike traditional approaches evaluated on simulated label noise, we propose a suite of new benchmark datasets, in the Sports, Species, and Artifacts domains, to evaluate the task of learning from noisy labels in a practical setting. The empirical study demonstrates the effectiveness of our proposed method in all the domains.
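The distillation idea described above can be sketched as forming a soft training target that mixes the noisy label with the prediction of an auxiliary model trained on the small clean dataset. This is a minimal illustrative sketch, not the paper's implementation; the function names and the mixing weight `lam` are assumptions.

```python
import numpy as np

def distilled_target(y_noisy, s_clean, lam=0.7):
    """Hedge a (possibly wrong) noisy label with the soft prediction
    of a model trained on a small clean dataset.

    y_noisy: one-hot noisy label, shape (num_classes,)
    s_clean: softmax output of the clean-set model, shape (num_classes,)
    lam:     trade-off weight between the two sources (illustrative value)
    """
    return lam * np.asarray(y_noisy) + (1.0 - lam) * np.asarray(s_clean)

def cross_entropy(pred, target, eps=1e-12):
    """Cross-entropy of the main model's prediction against the soft target."""
    return -float(np.sum(np.asarray(target) * np.log(np.asarray(pred) + eps)))

# Example: the noisy label says class 0, but the clean-set model favors class 1.
y = [1.0, 0.0, 0.0]
s = [0.1, 0.8, 0.1]
t = distilled_target(y, s, lam=0.7)  # -> [0.73, 0.24, 0.03]
```

Because both inputs are probability distributions, the mixed target remains one, so it can be plugged into a standard cross-entropy loss; minimizing that loss is equivalent to a weighted sum of losses against the noisy label and against the clean-model prediction.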
Related Material
[pdf]
[arXiv]
[bibtex]
@InProceedings{Li_2017_ICCV,
author = {Li, Yuncheng and Yang, Jianchao and Song, Yale and Cao, Liangliang and Luo, Jiebo and Li, Li-Jia},
title = {Learning From Noisy Labels With Distillation},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}