Noise-Aware Evaluation of Object Detectors

Jeffri Murrugarra Llerena, Claudio R. Jung; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 9304-9313

Abstract


Supervised object detection requires annotated datasets for training and evaluation purposes. However, human annotation of large datasets is error-prone, and frequent mistakes include erroneous labels, missing objects, and imprecise bounding boxes. The main goals of this work are to quantify the extent of annotation noise in terms of corner-wise discrepancies, assess how it impacts evaluation metrics for object detection, and propose noise-aware alternatives that serve as upper and lower bounds for a baseline metric. We focus our analysis on the Microsoft COCO dataset and re-evaluate several state-of-the-art object detectors using the proposed metrics. We show that the Average Precision (AP) metric might be considerably over- or under-estimated, particularly for small objects and restrictive IoU acceptance thresholds. Our code is available at https://github.com/Artcs1/Error-Aware.
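To see why annotation noise hits small objects and strict IoU thresholds hardest, consider a minimal sketch (not the paper's method, just an illustration): the same few-pixel corner perturbation pushes a small box below a typical 0.5 IoU acceptance threshold while barely affecting a large one.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Identical 2-pixel shift applied to a small (10x10) and a large (100x100) box.
small = iou((0, 0, 10, 10), (2, 2, 12, 12))      # ~0.47: fails a 0.5 threshold
large = iou((0, 0, 100, 100), (2, 2, 102, 102))  # ~0.92: comfortably passes
print(f"small: {small:.3f}, large: {large:.3f}")
```

A detection that is pixel-perfect against the true object can thus be counted as a false positive when the ground-truth annotation itself is noisy, which is what motivates metrics that bound AP from above and below.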

Related Material


[pdf]
[bibtex]
@InProceedings{Llerena_2025_WACV,
    author    = {Llerena, Jeffri Murrugarra and Jung, Claudio R.},
    title     = {Noise-Aware Evaluation of Object Detectors},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {9304-9313}
}