A Lazy Man's Approach to Benchmarking: Semisupervised Classifier Evaluation and Recalibration

Peter Welinder, Max Welling, Pietro Perona; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 3262-3269

Abstract


How many labeled examples are needed to estimate a classifier's performance on a new dataset? We study the case where data is plentiful but labels are expensive. We show that, by making a few reasonable assumptions about the structure of the data, it is possible to estimate performance curves, with confidence bounds, using a small number of ground-truth labels. Our approach, which we call Semisupervised Performance Evaluation (SPE), is based on a generative model for the classifier's confidence scores. In addition to estimating the performance of classifiers on new datasets, SPE can be used to recalibrate a classifier by re-estimating the class-conditional confidence distributions.
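To make the abstract's idea concrete, below is a minimal, hypothetical Python sketch of the core ingredient: a two-component generative model over confidence scores, fit semi-supervised with EM by clamping the responsibilities of the few labeled examples, from which performance curves and recalibrated posteriors follow by Bayes' rule. The Gaussian class-conditional assumption, the function names, and the EM procedure are illustrative choices, not the authors' implementation, and the sketch omits the confidence bounds that SPE derives from the full model.

import numpy as np
from scipy.stats import norm

def fit_semisupervised_mixture(scores, labels, n_iter=200):
    """EM for a two-Gaussian mixture over confidence scores.

    scores: confidence score for every item in the dataset.
    labels: array of {0, 1, -1}, where -1 marks unlabeled items.
    Assumes at least one labeled example per class (hypothetical sketch).
    """
    s = np.asarray(scores, dtype=float)
    y = np.asarray(labels)
    # Initialize from the labeled subset.
    pi = max(np.mean(y[y >= 0]), 1e-3)                     # class prior P(y=1)
    mu = np.array([s[y == 0].mean(), s[y == 1].mean()])    # class-conditional means
    sd = np.array([s[y == 0].std() + 1e-3, s[y == 1].std() + 1e-3])
    for _ in range(n_iter):
        # E-step: responsibility of the positive component for each score.
        p1 = pi * norm.pdf(s, mu[1], sd[1])
        p0 = (1 - pi) * norm.pdf(s, mu[0], sd[0])
        r = p1 / (p1 + p0 + 1e-300)
        # Clamp responsibilities on the labeled examples.
        r[y == 1] = 1.0
        r[y == 0] = 0.0
        # M-step: update prior, means, and standard deviations.
        pi = r.mean()
        mu = np.array([np.average(s, weights=1 - r), np.average(s, weights=r)])
        sd = np.array([
            np.sqrt(np.average((s - mu[0]) ** 2, weights=1 - r)) + 1e-6,
            np.sqrt(np.average((s - mu[1]) ** 2, weights=r)) + 1e-6,
        ])
    return pi, mu, sd

def performance_curve(pi, mu, sd, thresholds):
    """Precision/recall curve implied by the fitted score model."""
    tp = pi * norm.sf(thresholds, mu[1], sd[1])        # P(s > t, y = 1)
    fp = (1 - pi) * norm.sf(thresholds, mu[0], sd[0])  # P(s > t, y = 0)
    recall = tp / pi
    precision = tp / np.maximum(tp + fp, 1e-300)
    return precision, recall

def recalibrate(s, pi, mu, sd):
    """Posterior P(y = 1 | s) under the fitted mixture (recalibrated score)."""
    p1 = pi * norm.pdf(s, mu[1], sd[1])
    p0 = (1 - pi) * norm.pdf(s, mu[0], sd[0])
    return p1 / (p1 + p0)

Given scores for a large unlabeled dataset and labels for only a handful of items (encoded as -1 elsewhere), one would call fit_semisupervised_mixture, then read an estimated precision/recall curve off performance_curve and calibrated posteriors off recalibrate; for example, precision, recall = performance_curve(pi, mu, sd, np.linspace(0, 1, 101)).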

Related Material


[bibtex]
@InProceedings{Welinder_2013_CVPR,
author = {Welinder, Peter and Welling, Max and Perona, Pietro},
title = {A Lazy Man's Approach to Benchmarking: Semisupervised Classifier Evaluation and Recalibration},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2013},
pages = {3262-3269}
}