Classification Drives Geographic Bias in Street Scene Segmentation

Rahul Nair, Bhanu Tokas, Gabriel Tseng, Esther Rolf, Hannah Kerner; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops, 2025, pp. 629-638

Abstract

Previous studies have shown that image datasets lacking geographic diversity can lead to biased performance in models trained on them. Most prior work studied geo-bias in general-purpose image datasets (e.g., ImageNet, OpenImages) using simple tasks like image classification. Recent works have studied geo-bias in application-specific image datasets like driving datasets, but they have focused only on coarse-grained localization tasks like 2D or 3D detection. In this work, we investigate geo-bias in a Eurocentric driving dataset (Cityscapes) on the fine-grained localization task of instance segmentation. Consistent with previous work, we find that instance segmentation models trained on European driving scenes (Eurocentric models) are geo-biased. Interestingly, we find that the geo-bias comes from classification errors rather than localization errors, with classification errors alone contributing 10-90% of the geo-bias in segmentation and 19-88% of the geo-bias in detection. Our findings suggest that users who want to apply region-specific models (e.g., Eurocentric models) globally may prefer to coarsen label categories (e.g., use a common label like 4-wheeler instead of labels like car, bus, and truck). Coarser labels can reduce classification errors, which, as we show in this work, are a major contributor to geo-bias.
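The label-coarsening idea above can be sketched in a few lines: merge fine-grained vehicle classes into a single coarse category before evaluation, so that confusions between sibling classes (e.g., predicting car where the ground truth is truck) no longer count as errors. The class names and mapping below are illustrative assumptions for a minimal sketch, not the paper's actual code or taxonomy.

```python
# Illustrative label-coarsening map (assumed grouping, not from the paper).
COARSE_MAP = {
    "car": "4-wheeler",
    "bus": "4-wheeler",
    "truck": "4-wheeler",
    "bicycle": "2-wheeler",
    "motorcycle": "2-wheeler",
    "person": "person",
}

def coarsen(labels):
    """Map fine-grained labels to coarse categories (unknown labels pass through)."""
    return [COARSE_MAP.get(label, label) for label in labels]

# Toy predictions vs. ground truth for four correctly localized instances.
preds = ["car", "bus", "person", "motorcycle"]
truth = ["truck", "car", "person", "bicycle"]

# Fine-grained accuracy: only "person" matches exactly (1 of 4).
fine_acc = sum(p == t for p, t in zip(preds, truth)) / len(truth)

# Coarse accuracy: every pair falls in the same coarse class (4 of 4),
# so classification errors between sibling classes disappear.
coarse_acc = sum(p == t for p, t in zip(coarsen(preds), coarsen(truth))) / len(truth)

print(fine_acc, coarse_acc)  # 0.25 1.0
```

In a real instance-segmentation pipeline, the same remapping would be applied to both predicted and ground-truth category IDs before computing AP, leaving mask localization untouched.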

Related Material

@InProceedings{Nair_2025_CVPR,
    author    = {Nair, Rahul and Tokas, Bhanu and Tseng, Gabriel and Rolf, Esther and Kerner, Hannah},
    title     = {Classification Drives Geographic Bias in Street Scene Segmentation},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
    month     = {June},
    year      = {2025},
    pages     = {629-638}
}