Explainable Noisy Label Flipping for Multi-Label Fashion Image Classification

Beatriz Quintino Ferreira, Joao P. Costeira, Joao P. Gomes; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 3916-3920

Abstract


In online shopping applications, the daily insertion of new products requires an overwhelming annotation effort. Usually performed by humans, this annotation comes at a huge cost and still produces high rates of noisy/missing labels that seriously hinder the effectiveness of CNNs in multi-label classification. We propose SELF-ML, a classification framework that exploits the relation between visual attributes and appearance, together with the "low-rank" nature of the feature space. It learns a sparse reconstruction of image features as a convex combination of very few correctly annotated images - a basis. Building on this representation, SELF-ML includes a module that relabels noisy annotations from the derived combination of the clean data. Thanks to this structured reconstruction, SELF-ML can explain its label-flipping decisions. Experiments on a real-world shopping dataset show that SELF-ML significantly increases the number of correct labels even with few clean annotations.
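The core idea described above - reconstructing a noisy image's features as a sparse convex combination of a clean basis, then relabeling by propagating the basis labels through the combination weights - can be illustrated with a minimal sketch. This is not the authors' implementation; the basis size, optimization routine (projected gradient with simplex projection), and 0.5 voting threshold are illustrative assumptions:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex {c : c >= 0, sum(c) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def convex_reconstruction(x, B, n_iter=500):
    """Solve min_c ||x - B c||^2 s.t. c >= 0, sum(c) = 1 by projected gradient.
    B is a (d, k) matrix whose columns are features of k clean images."""
    k = B.shape[1]
    c = np.full(k, 1.0 / k)                    # start at the simplex center
    lr = 1.0 / (np.linalg.norm(B, 2) ** 2)     # step size from a Lipschitz bound
    for _ in range(n_iter):
        grad = B.T @ (B @ c - x)
        c = project_simplex(c - lr * grad)
    return c

# Toy data: 5 clean "images" with 16-dim features and 4 binary attributes each.
rng = np.random.default_rng(0)
B = rng.normal(size=(16, 5))
clean_labels = rng.integers(0, 2, size=(5, 4)).astype(float)
true_c = np.array([0.7, 0.3, 0.0, 0.0, 0.0])   # query lies in the convex hull
x = B @ true_c

c = convex_reconstruction(x, B)
# Relabel the noisy image by a weighted vote of the clean labels it is built from;
# the nonzero weights in c are the "explanation" for each flipped label.
flipped = (clean_labels.T @ c > 0.5).astype(int)
```

Because the recovered weights are sparse and sum to one, each relabeling decision can be traced back to the handful of clean basis images with nonzero weight, which is what makes the flipping explainable.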

Related Material


@InProceedings{Ferreira_2021_CVPR,
  author    = {Ferreira, Beatriz Quintino and Costeira, Joao P. and Gomes, Joao P.},
  title     = {Explainable Noisy Label Flipping for Multi-Label Fashion Image Classification},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2021},
  pages     = {3916-3920}
}