MetricOpt: Learning To Optimize Black-Box Evaluation Metrics

Chen Huang, Shuangfei Zhai, Pengsheng Guo, Josh Susskind; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 174-183

Abstract


We study the problem of directly optimizing arbitrary non-differentiable task evaluation metrics such as misclassification rate and recall. Our method, named MetricOpt, operates in a black-box setting where the computational details of the target metric are unknown. We achieve this by learning a differentiable value function, which maps compact task-specific model parameters to metric observations. The learned value function is easily pluggable into existing optimizers like SGD and Adam, and is effective for rapidly finetuning a pre-trained model. This leads to consistent improvements since the value function provides effective metric supervision during finetuning, and helps to correct the potential bias of loss-only supervision. MetricOpt achieves state-of-the-art performance on a variety of metrics for (image) classification, image retrieval and object detection. Solid benefits are found over competing methods, which often involve complex loss design or adaptation. MetricOpt also generalizes well to new tasks and model architectures.
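The abstract describes learning a differentiable value function that maps compact task-specific model parameters to observed metric values, then plugging that surrogate into a standard optimizer during finetuning. Below is a minimal, hypothetical PyTorch sketch of that idea; every name here (ValueFunction, fit_value_function, finetune_step, metric_weight) is an illustrative assumption and not taken from the paper, whose actual formulation may differ.

    import torch
    import torch.nn as nn

    class ValueFunction(nn.Module):
        """Differentiable surrogate mapping compact task-specific parameters
        (e.g. a low-dimensional adapter vector) to a scalar metric estimate."""
        def __init__(self, dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, compact_params):
            return self.net(compact_params).squeeze(-1)

    def fit_value_function(value_fn, param_history, metric_history, steps=200, lr=1e-3):
        """Regress recorded (compact parameters -> black-box metric) observations."""
        opt = torch.optim.Adam(value_fn.parameters(), lr=lr)
        params = torch.stack([p.detach() for p in param_history])    # (N, dim)
        metrics = torch.tensor(metric_history, dtype=torch.float32)  # (N,)
        for _ in range(steps):
            opt.zero_grad()
            loss = nn.functional.mse_loss(value_fn(params), metrics)
            loss.backward()
            opt.step()
        return value_fn

    def finetune_step(compact_params, task_loss, value_fn, optimizer, metric_weight=1.0):
        """One finetuning step: the usual task loss plus the differentiable metric
        estimate, assuming the metric is an error to be minimized (e.g.
        misclassification rate). Any off-the-shelf optimizer (SGD, Adam) over
        compact_params works; value_fn's own weights stay fixed because they are
        not registered with `optimizer`."""
        optimizer.zero_grad()
        total = task_loss + metric_weight * value_fn(compact_params)
        total.backward()
        optimizer.step()
        return total.item()

In this sketch the value function stands in for the non-differentiable metric, so ordinary backpropagation supplies the "metric supervision" the abstract refers to on top of the task loss; how the parameter/metric observations are collected and kept up to date is part of the actual method and beyond this illustration.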

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Huang_2021_CVPR,
    author    = {Huang, Chen and Zhai, Shuangfei and Guo, Pengsheng and Susskind, Josh},
    title     = {MetricOpt: Learning To Optimize Black-Box Evaluation Metrics},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {174-183}
}