Consensus-Driven Active Model Selection

Justin Kay, Grant Van Horn, Subhransu Maji, Daniel Sheldon, Sara Beery; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 4594-4604

Abstract


The widespread availability of off-the-shelf machine learning models poses a challenge: which model, of the many available candidates, should be chosen for a given data analysis task? This question of model selection is traditionally answered by collecting and annotating a validation dataset, a costly and time-intensive process. We propose a method for active model selection, using predictions from candidate models to prioritize the labeling of test data points that efficiently differentiate the best candidate. Our method, CODA, performs consensus-driven active model selection by modeling relationships between classifiers, categories, and data points within a probabilistic framework. The framework uses the consensus and disagreement between models in the candidate pool to guide the label acquisition process, and Bayesian inference to update beliefs about which model is best as more information is collected. We validate our approach by curating a collection of 26 benchmark tasks capturing a range of model selection scenarios. CODA significantly outperforms existing methods for active model selection, reducing the annotation effort required to discover the best model by upwards of 70% compared to the previous state-of-the-art. Code and data are available at https://github.com/justinkay/coda.
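To make the high-level description above concrete, the following is a minimal illustrative sketch of the general idea of consensus-driven label acquisition combined with Bayesian accuracy updates. It is not the authors' CODA algorithm or its probabilistic framework; the function name, the Beta(1, 1) accuracy priors, and the majority-consensus disagreement heuristic are assumptions made solely for this sketch.

import numpy as np

def select_best_model(preds, label_oracle, budget):
    """Toy active model selection sketch (not the CODA implementation).

    preds:        (n_models, n_points) integer array of predicted class labels.
    label_oracle: callable mapping a point index to its true label,
                  simulating a human annotator.
    budget:       number of points we can afford to label.
    Returns the index of the candidate model with the highest posterior
    mean accuracy after the labeling budget is spent.
    """
    n_models, n_points = preds.shape

    # Beta(1, 1) prior on each model's accuracy (an assumption of this
    # sketch, not the paper's probabilistic framework).
    correct = np.ones(n_models)
    wrong = np.ones(n_models)

    unlabeled = list(range(n_points))
    for _ in range(min(budget, n_points)):
        # Disagreement heuristic: label the point whose predictions are
        # furthest from unanimous consensus among the candidate models.
        majority_share = np.array(
            [np.bincount(preds[:, i]).max() / n_models for i in unlabeled]
        )
        pick = unlabeled.pop(int(np.argmin(majority_share)))

        # Acquire the true label and update each model's accuracy posterior.
        y = label_oracle(pick)
        hits = preds[:, pick] == y
        correct += hits
        wrong += ~hits

    posterior_mean_acc = correct / (correct + wrong)
    return int(np.argmax(posterior_mean_acc))

For example, with preds of shape (5, 1000) and a budget of 50, this sketch would query the 50 most contested points and return the candidate whose posterior mean accuracy is highest under those labels; the paper's actual method additionally models relationships between classifiers, categories, and data points rather than treating each model's accuracy independently.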

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Kay_2025_ICCV,
    author    = {Kay, Justin and Van Horn, Grant and Maji, Subhransu and Sheldon, Daniel and Beery, Sara},
    title     = {Consensus-Driven Active Model Selection},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {4594-4604}
}