Grouper: Optimizing Crowdsourced Face Annotations

Jocelyn C. Adams, Kristen C. Allen, Timothy Miller, Nathan D. Kalka, Anil K. Jain; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2016, pp. 95-102

Abstract


This study addresses the problem of extracting consistent and accurate face bounding box annotations from crowdsourced workers. Aiming to provide benchmark datasets for face recognition training and testing, we create a "gold standard" set against which consolidated face bounding box annotations can be evaluated. We present an evaluation methodology based on scores for several features of bounding box annotations and show that it predicts consolidation performance using information gathered from the crowdsourced annotations. Building on this foundation, we present "Grouper," a method that leverages density-based clustering to consolidate annotations made by crowd workers. We demonstrate that the proposed consolidation scheme, which should extend to other region annotation consolidation tasks, improves upon the metadata released with the IARPA Janus Benchmark-A (IJB-A). Finally, we compare face recognition (FR) performance using the originally provided IJB-A annotations and the Grouper consolidations, and determine that similarity to the gold standard, as measured by our evaluation metric, does predict recognition performance.
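The abstract names density-based clustering as Grouper's consolidation mechanism but gives no further detail. The sketch below illustrates the general idea with DBSCAN over raw box coordinates; the choice of DBSCAN, the eps and min_workers parameters, and median-based consolidation are assumptions made here for illustration, not the paper's actual method.

```python
# A minimal sketch of density-based consolidation in the spirit of Grouper.
# Assumptions: DBSCAN over (x, y, w, h) vectors, Euclidean eps in pixels,
# and per-cluster medians; the paper's actual features and parameters are
# not specified in this abstract.
import numpy as np
from sklearn.cluster import DBSCAN

def consolidate_boxes(boxes, eps=30.0, min_workers=2):
    """Consolidate crowdsourced boxes (x, y, w, h) into one box per face.

    Each row of `boxes` is one worker's annotation on the same image.
    DBSCAN groups annotations that plausibly refer to the same face; the
    median of each cluster is returned as the consolidated bounding box.
    """
    boxes = np.asarray(boxes, dtype=float)
    labels = DBSCAN(eps=eps, min_samples=min_workers).fit_predict(boxes)
    consolidated = []
    for label in sorted(set(labels)):
        if label == -1:  # noise: annotations no other worker corroborated
            continue
        cluster = boxes[labels == label]
        consolidated.append(np.median(cluster, axis=0))
    return consolidated

# Three workers roughly agree on one face; a fourth box is an outlier
# and is discarded as noise.
annotations = [(100, 80, 60, 60), (104, 78, 58, 62), (98, 83, 61, 59),
               (400, 300, 50, 50)]
print(consolidate_boxes(annotations))
```

Requiring min_workers corroborating annotations per cluster is what filters out spurious boxes from individual workers; eps controls how much pixel-level disagreement still counts as the same face.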

Related Material


[bibtex]
@InProceedings{Adams_2016_CVPR_Workshops,
author = {Adams, Jocelyn C. and Allen, Kristen C. and Miller, Timothy and Kalka, Nathan D. and Jain, Anil K.},
title = {Grouper: Optimizing Crowdsourced Face Annotations},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2016}
}