Classes Are Not Equal: An Empirical Study on Image Recognition Fairness

Jiequan Cui, Beier Zhu, Xin Wen, Xiaojuan Qi, Bei Yu, Hanwang Zhang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 23283-23292

Abstract


In this paper, we present an empirical study on image recognition unfairness, i.e., extreme class accuracy disparity on balanced data such as ImageNet. We demonstrate that classes are not equal and that unfairness is prevalent in image classification models across various datasets, network architectures, and model capacities. Moreover, several intriguing properties of fairness are identified. First, the unfairness lies in problematic representation rather than classifier bias, in contrast to long-tailed recognition. Second, with the proposed concept of Model Prediction Bias, we investigate the origins of problematic representation during training optimization. Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize; that is, more classes get confused with the harder ones. The resulting False Positives (FPs) then dominate the optimization of these classes, leading to their poor accuracy. Finally, we conclude that data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
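
The per-class disparity and prediction-bias diagnostics described above can be illustrated with a minimal sketch. Note this is not the authors' exact definition of Model Prediction Bias; the helper `per_class_fairness_stats` is hypothetical, and the false-positive count is used here only as a rough proxy for how strongly other classes are confused with a given class.

```python
import numpy as np

def per_class_fairness_stats(labels: np.ndarray, preds: np.ndarray, num_classes: int):
    """Per-class accuracy and false-positive counts from a confusion matrix.

    The FP count is an illustrative proxy for prediction bias toward a class,
    not the paper's exact Model Prediction Bias definition.
    """
    # Confusion matrix: rows = ground-truth class, cols = predicted class.
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (labels, preds), 1)

    # Per-class accuracy (recall): correct predictions over class frequency.
    per_class_acc = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)
    # False positives: samples of other classes predicted as this class.
    false_positives = cm.sum(axis=0) - np.diag(cm)
    return per_class_acc, false_positives

# Toy usage on a balanced 10-class setup with random predictions.
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=1000)
preds = rng.integers(0, 10, size=1000)
acc, fps = per_class_fairness_stats(labels, preds, num_classes=10)
print("accuracy gap (best - worst class):", acc.max() - acc.min())
print("false positives per class:", fps)
```

On a trained classifier, the paper's observation would correspond to the hardest classes (lowest `acc`) also attracting the largest `fps`, i.e., the false positives concentrating on classes that are already difficult to recognize.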

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Cui_2024_CVPR,
    author    = {Cui, Jiequan and Zhu, Beier and Wen, Xin and Qi, Xiaojuan and Yu, Bei and Zhang, Hanwang},
    title     = {Classes Are Not Equal: An Empirical Study on Image Recognition Fairness},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {23283-23292}
}