Rethinking Common Assumptions To Mitigate Racial Bias in Face Recognition Datasets

Matthew Gwilliam, Srinidhi Hegde, Lade Tinubu, Alex Hanson; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021, pp. 4123-4132

Abstract


Many existing works have made great strides towards reducing racial bias in face recognition. However, most of these methods attempt to rectify bias that manifests in models during training instead of directly addressing a major source of the bias: the dataset itself. Exceptions to this are BUPT-Balancedface/RFW and FairFace, but these works assume that training primarily on a single race, or failing to racially balance the dataset, is inherently disadvantageous. We demonstrate that these assumptions are not necessarily valid. In our experiments, training on only African faces induced less bias than training on a balanced distribution of faces, and distributions skewed to include more African faces produced more equitable models. We additionally observe that adding more images of existing identities to a dataset, in place of adding new identities, can lead to accuracy boosts across racial categories.
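The skewed-distribution experiments described above amount to resampling training identities to hit a target racial composition. A minimal sketch of that idea is below; the function name, the race labels, and the target proportions are illustrative assumptions, not the authors' actual protocol or splits.

```python
import random

def skewed_identity_sample(identities_by_race, total, proportions, seed=0):
    """Sample identity IDs so the training set follows a target racial
    distribution (e.g. skewed toward African faces, as the paper explores).

    identities_by_race: dict mapping race label -> list of identity IDs
    proportions: dict mapping race label -> target fraction (sums to 1.0)
    """
    rng = random.Random(seed)
    sample = []
    for race, frac in proportions.items():
        k = int(round(total * frac))
        pool = identities_by_race[race]
        # Sample without replacement, capped at the pool size.
        sample.extend(rng.sample(pool, min(k, len(pool))))
    return sample

# Toy example with hypothetical race labels and a skew toward African IDs.
pools = {race: [f"{race}_{i}" for i in range(100)]
         for race in ["African", "Asian", "Caucasian", "Indian"]}
chosen = skewed_identity_sample(
    pools, total=100,
    proportions={"African": 0.55, "Asian": 0.15,
                 "Caucasian": 0.15, "Indian": 0.15})
print(len(chosen))
```

The paper's second finding (more images per identity versus more identities) would change what each ID maps to, not this identity-level sampling step.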

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Gwilliam_2021_ICCV,
    author    = {Gwilliam, Matthew and Hegde, Srinidhi and Tinubu, Lade and Hanson, Alex},
    title     = {Rethinking Common Assumptions To Mitigate Racial Bias in Face Recognition Datasets},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2021},
    pages     = {4123-4132}
}