A Multidimensional Analysis of Social Biases in Vision Transformers

Jannik Brinkmann, Paul Swoboda, Christian Bartelt; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 4914-4923

Abstract


The embedding spaces of image models have been shown to encode a range of social biases such as racism and sexism. Here, we investigate specific factors that contribute to the emergence of these biases in Vision Transformers (ViTs). To this end, we measure the impact of training data, model architecture, and training objectives on social biases in the learned representations of ViTs. Our findings indicate that counterfactual augmentation training using diffusion-based image editing can mitigate biases, but does not eliminate them. Moreover, we find that larger models are less biased than smaller models, and that models trained using discriminative objectives are less biased than those trained using generative objectives. In addition, we observe inconsistencies in the learned social biases: to our surprise, ViTs can exhibit opposite biases when trained on the same data set using different self-supervised objectives. Our findings give insight into the factors that contribute to the emergence of social biases and suggest that substantial fairness improvements could be achieved through model design choices.
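Bias in embedding spaces of the kind the abstract refers to is commonly quantified with an embedding association test (the image variant, iEAT, adapts WEAT to image features). The sketch below is not the authors' released code; it is a minimal illustration of an iEAT-style effect size computed over ViT features, where the variable names (X, Y for target image sets, A, B for attribute image sets) and the placeholder 768-dimensional embeddings are assumptions for illustration.

```python
# Minimal sketch of an iEAT-style embedding association test.
# X, Y: embeddings of two target image sets (e.g., two demographic groups)
# A, B: embeddings of two attribute image sets (e.g., pleasant vs. unpleasant)
# All shapes and names are illustrative, not taken from the paper's code.
import numpy as np


def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))


def association(w, A, B):
    """Differential association of one embedding w with attribute sets A and B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])


def effect_size(X, Y, A, B):
    """Cohen's-d-style effect size: > 0 means X is more associated with A than Y is."""
    x_assoc = np.array([association(x, A, B) for x in X])
    y_assoc = np.array([association(y, A, B) for y in Y])
    pooled = np.concatenate([x_assoc, y_assoc])
    return (x_assoc.mean() - y_assoc.mean()) / pooled.std(ddof=1)


# Usage with random placeholder embeddings; in practice these would be
# ViT features (e.g., [CLS] token outputs) for the four image sets.
rng = np.random.default_rng(0)
X, Y, A, B = (rng.normal(size=(10, 768)) for _ in range(4))
print(effect_size(X, Y, A, B))
```

An effect size near zero indicates no measured association between the target groups and the attribute sets; larger magnitudes indicate stronger bias in the representation space.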

Related Material


@InProceedings{Brinkmann_2023_ICCV,
    author    = {Brinkmann, Jannik and Swoboda, Paul and Bartelt, Christian},
    title     = {A Multidimensional Analysis of Social Biases in Vision Transformers},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {4914-4923}
}