Task Configuration Impacts Annotation Quality and Model Training Performance in Crowdsourced Image Segmentation
Abstract
Many industrial image segmentation systems require training on large annotated datasets, but there is little standardization for producing training annotations. Data is often obtained via crowdsourcing, but many labeling task configurations are set by convenience and may have unintended effects on data quality. In this work we (1) demonstrate a new software tool for running crowdsourced image segmentation experiments, (2) present a dataset capturing variation in segmentation annotations produced under different task configurations, and (3) experimentally evaluate the quality of the annotations produced by these different configurations. We show that annotation quality can be significantly improved by paying annotators per annotated object rather than per image and by leveraging paintbrush-style drawing tools rather than polygon or curve drag tools. We also show that some task complexity is required to maintain annotator engagement and sufficient task performance. Finally, we show that many configuration-related annotation errors degrade model training performance, but that models can tolerate error patterns that are common across crowdsourced annotation schemes.
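The abstract does not specify which quality metrics were used. As a minimal, hypothetical sketch (an assumption for illustration, not the paper's actual evaluation code), per-mask intersection-over-union against a reference segmentation is one common way to quantify how closely a crowdsourced annotation matches a ground-truth mask:

import numpy as np

def mask_iou(pred: np.ndarray, ref: np.ndarray) -> float:
    """Intersection-over-union between two boolean segmentation masks.

    Illustrative metric only; the paper's actual quality measures are not
    given in the abstract.
    """
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return float(inter) / float(union) if union > 0 else 1.0

# Toy example: one annotator's mask vs. a reference mask for a single object.
ref = np.zeros((8, 8), dtype=bool)
ref[2:6, 2:6] = True        # reference object region
crowd = np.zeros((8, 8), dtype=bool)
crowd[3:7, 2:6] = True      # annotator's slightly shifted mask
print(f"IoU = {mask_iou(crowd, ref):.2f}")  # prints IoU = 0.60

Aggregating such per-object scores across annotators and task configurations would be one straightforward way to compare configurations like pay-per-object versus pay-per-image or paintbrush versus polygon tools.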
Related Material
[pdf]  [supp]  [bibtex]
@InProceedings{Bauchwitz_2025_WACV,
  author    = {Bauchwitz, Benjamin R and Cummings, Mary},
  title     = {Task Configuration Impacts Annotation Quality and Model Training Performance in Crowdsourced Image Segmentation},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {6646-6656}
}