Mitigating Algorithmic Bias: Evolving an Augmentation Policy that is Non-Biasing

Philip Smith, Karl Ricanek; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops, 2020, pp. 90-97

Abstract


Artificial Intelligence promises to make the world a safer place through automation. Automobiles can be steered between traffic lines, spoken words can be translated into textual commands, and wanted persons can be identified by law enforcement. These tasks, once achievable only by humans, can now be performed by AI with great speed and precision. But if these algorithms are negatively biased against certain groups, what unforeseen harm may come to society? This work focuses on the classification of gender and age, a problem known to exhibit systemic negative bias against certain subgroups, to investigate the role of data augmentation in mitigating such bias. A novel approach is proposed for mitigating bias in a deep learning algorithm that estimates age and gender: settings for numerous data augmentation techniques are learned through an evolutionary process that optimizes augmentation for specific subgroups. This approach is shown to reduce systemic bias while also improving model generalization and achieving state-of-the-art results. The tools we use for determining human biometrics must be fair and non-discriminatory. This work therefore examines not only bias itself, but also the insights gleaned from augmentation policies that succeed or fail in particular scenarios.
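
To make the evolutionary search concrete, the sketch below is a minimal illustration, not the authors' implementation: it evolves per-operation augmentation magnitudes against a fitness that penalizes the error gap between the best- and worst-served subgroups. The operation set and ranges, the mutation scheme, the penalty weight, and the evaluate callback (which would train a model under a candidate policy and return per-subgroup error rates) are all assumptions made for illustration.

import random

# Hypothetical augmentation operations and magnitude ranges; the paper's
# actual operation set and ranges are not reproduced here.
OPS = {
    "rotation_deg": (0.0, 30.0),
    "width_shift": (0.0, 0.2),
    "zoom": (0.0, 0.3),
    "brightness": (0.0, 0.5),
}

def random_policy():
    # A policy assigns one magnitude to each augmentation operation.
    return {op: random.uniform(lo, hi) for op, (lo, hi) in OPS.items()}

def mutate(policy, rate=0.3):
    # Perturb each magnitude with probability `rate`, clipped to its range.
    child = dict(policy)
    for op, (lo, hi) in OPS.items():
        if random.random() < rate:
            child[op] = min(hi, max(lo, child[op] + random.gauss(0.0, 0.1 * (hi - lo))))
    return child

def fitness(policy, evaluate):
    # Lower is better: mean subgroup error plus a penalty on the spread
    # between subgroups (the bias term). The weight 2.0 is an assumption.
    errors = evaluate(policy)  # maps subgroup name -> error rate
    mean_err = sum(errors.values()) / len(errors)
    bias_gap = max(errors.values()) - min(errors.values())
    return mean_err + 2.0 * bias_gap

def evolve(evaluate, pop_size=8, generations=10):
    # Truncation selection: keep the better half, refill by mutating parents.
    # In practice each evaluation trains a model, so scores would be cached.
    population = [random_policy() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda p: fitness(p, evaluate))
        parents = population[: pop_size // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return min(population, key=lambda p: fitness(p, evaluate))

# Usage with a stand-in evaluator; a real evaluator would train the age and
# gender model under the candidate policy and report per-subgroup error rates.
best = evolve(lambda policy: {"group_a": random.random(), "group_b": random.random()})

Under these assumptions, a policy that drives down overall error while leaving one subgroup behind is scored worse than a slightly less accurate policy that serves all subgroups evenly, which matches the non-biasing objective described in the abstract.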

Related Material


[bibtex]
@InProceedings{Smith_2020_WACV,
author = {Smith, Philip and Ricanek, Karl},
title = {Mitigating Algorithmic Bias: Evolving an Augmentation Policy that is Non-Biasing},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops},
month = {March},
year = {2020}
}