Enhancing Fairness of Visual Attribute Predictors

Tobias Hänel, Nishant Kumar, Dmitrij Schlesinger, Mengze Li, Erdem Ünal, Abouzar Eslami, Stefan Gumhold; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 1211-1227

Abstract


The performance of deep neural networks for image recognition tasks such as predicting a smiling face is known to degrade for under-represented classes of sensitive attributes. We address this problem by introducing fairness-aware regularization losses based on batch estimates of Demographic Parity, Equalized Odds, and a novel Intersection-over-Union measure. The experiments performed on facial and medical images from CelebA, UTKFace, and the SIIM-ISIC melanoma classification challenge show the effectiveness of our proposed fairness losses for bias mitigation, as they improve model fairness while maintaining high classification performance. To the best of our knowledge, our work is the first attempt to incorporate these types of losses in an end-to-end training scheme for mitigating biases of visual attribute predictors.
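To illustrate the idea of a fairness-aware regularization loss computed from batch statistics, below is a minimal sketch (not the authors' implementation) of a Demographic Parity penalty in PyTorch. It assumes a binary target, a binary sensitive attribute, and soft model outputs; the function name, the weighting factor lambda_fair, and the usage line are illustrative assumptions only.

import torch

def demographic_parity_penalty(probs: torch.Tensor, groups: torch.Tensor) -> torch.Tensor:
    """Illustrative batch-level Demographic Parity penalty (hypothetical helper).

    probs:  (N,) predicted probabilities of the positive class (e.g., "smiling")
    groups: (N,) binary sensitive-attribute labels (0 or 1)

    Returns the squared gap between the mean predicted positive rates of the
    two groups in the current batch. In practice, each batch must contain
    samples from both groups for this estimate to be defined.
    """
    rate_g0 = probs[groups == 0].mean()
    rate_g1 = probs[groups == 1].mean()
    return (rate_g0 - rate_g1) ** 2

# Example usage during training (lambda_fair is an assumed trade-off weight):
# total_loss = task_loss + lambda_fair * demographic_parity_penalty(probs, groups)

Because the penalty is a differentiable function of the network outputs, it can be added to the classification loss and optimized end-to-end; analogous batch estimates could be formed for Equalized Odds or an Intersection-over-Union-style measure by conditioning on the ground-truth label as well.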

Related Material


[pdf] [supp] [code]
[bibtex]
@InProceedings{Hanel_2022_ACCV,
    author    = {H\"anel, Tobias and Kumar, Nishant and Schlesinger, Dmitrij and Li, Mengze and \"Unal, Erdem and Eslami, Abouzar and Gumhold, Stefan},
    title     = {Enhancing Fairness of Visual Attribute Predictors},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {1211-1227}
}