A Differentiable Distance Approximation for Fairer Image Classification

Nicholas E Rosa, Tom Drummond, Mehrtash Harandi; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 212-228

Abstract


Naively trained AI models can be heavily biased. This can be particularly problematic when the biases involve legally or morally protected attributes such as ethnic background, age or gender. Existing solutions to this problem come at the cost of extra computation, unstable adversarial optimisation, or losses on the feature-space structure that are disconnected from fairness measures and only loosely generalise to fairness. In this work we propose a differentiable approximation of the variance of demographics, a metric that can be used to measure the bias, or unfairness, in an AI model. Our approximation can be optimised alongside the regular training objective, which eliminates the need for any extra models during training and directly improves the fairness of the regularised models. We demonstrate that our approach improves the fairness of AI models across varied task and dataset scenarios, whilst still maintaining a high level of classification accuracy. Code is available at https://bitbucket.org/nelliottrosa/base_fairness.
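As a rough illustration of how such a regulariser can sit alongside the usual training objective, the sketch below penalises the variance, across demographic groups, of a differentiable surrogate for per-group accuracy (here the mean softmax probability of the true class). The surrogate choice, the group tensor, and the weight lambda_fair are assumptions made for this example, not the paper's exact approximation; see the repository for the authors' implementation.

    # Illustrative sketch only: standard classification loss plus a fairness
    # penalty given by the variance, across demographic groups, of a
    # differentiable surrogate for per-group accuracy. The surrogate and the
    # weight lambda_fair are assumptions for this example.
    import torch
    import torch.nn.functional as F

    def fairness_penalty(logits, targets, groups, num_groups):
        """Variance over groups of a soft (differentiable) accuracy surrogate."""
        probs = F.softmax(logits, dim=1)
        # Probability assigned to the correct class for each sample.
        correct_prob = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
        group_scores = []
        for g in range(num_groups):
            mask = groups == g
            if mask.any():
                group_scores.append(correct_prob[mask].mean())
        group_scores = torch.stack(group_scores)
        # Zero when every demographic group is scored equally well.
        return group_scores.var(unbiased=False)

    def training_step(model, images, targets, groups, num_groups, lambda_fair=1.0):
        logits = model(images)
        ce_loss = F.cross_entropy(logits, targets)
        fair_loss = fairness_penalty(logits, targets, groups, num_groups)
        return ce_loss + lambda_fair * fair_loss

In practice lambda_fair would be tuned to balance classification accuracy against fairness; no extra models are required, since the penalty is computed from the same batch outputs as the main loss.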

Related Material


[pdf] [supp] [arXiv] [code]
[bibtex]
@InProceedings{E_Rosa_2022_ACCV,
    author    = {E Rosa, Nicholas and Drummond, Tom and Harandi, Mehrtash},
    title     = {A Differentiable Distance Approximation for Fairer Image Classification},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {212-228}
}