Predicting With Confidence on Unseen Distributions

Devin Guillory, Vaishaal Shankar, Sayna Ebrahimi, Trevor Darrell, Ludwig Schmidt; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 1134-1144

Abstract


Recent work has shown that the accuracy of machine learning models can vary substantially when evaluated on a distribution that differs even slightly from that of the training data. As a result, predicting model performance on previously unseen distributions without access to labeled data is an important challenge with implications for increasing the reliability of machine learning models. In the context of distribution shift, distance measures are often used to adapt models and improve their performance on new domains; however, accuracy estimation is seldom explored in these investigations. Our investigation determines that common distributional distances, such as Fréchet distance or Maximum Mean Discrepancy, fail to induce reliable estimates of performance under distribution shift. On the other hand, we find that our proposed difference of confidences (DoC) approach yields successful estimates of a classifier's performance over a variety of shifts and model architectures. Despite its simplicity, we observe that DoC outperforms other methods across synthetic, natural, and adversarial distribution shifts, reducing error by more than 46% on several realistic and challenging datasets such as ImageNet-Vid-Robust and ImageNet-Rendition.
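The DoC idea admits a compact sketch: the drop in a model's average top-class softmax confidence from the source distribution to the target distribution is subtracted from the measured source accuracy to estimate target accuracy. Below is a minimal NumPy illustration under that reading of the abstract; the function name, signature, and array layout (`N x C` softmax outputs) are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def doc_accuracy_estimate(source_probs, source_labels, target_probs):
    """Estimate accuracy on an unlabeled target set via difference
    of confidences (DoC), as a hypothetical sketch of the approach.

    source_probs:  (N, C) softmax outputs on labeled source data
    source_labels: (N,)   ground-truth source labels
    target_probs:  (M, C) softmax outputs on unlabeled target data
    """
    # Measured accuracy on the labeled source (held-out) set.
    source_acc = np.mean(source_probs.argmax(axis=1) == source_labels)
    # Average top-class confidence on each distribution.
    conf_source = source_probs.max(axis=1).mean()
    conf_target = target_probs.max(axis=1).mean()
    # DoC: the confidence gap between source and target,
    # used as a proxy for the accuracy drop under shift.
    doc = conf_source - conf_target
    return source_acc - doc
```

Intuitively, if the classifier becomes less confident on the shifted data, DoC translates that confidence drop directly into a predicted accuracy drop, with no target labels required.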

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Guillory_2021_ICCV,
    author    = {Guillory, Devin and Shankar, Vaishaal and Ebrahimi, Sayna and Darrell, Trevor and Schmidt, Ludwig},
    title     = {Predicting With Confidence on Unseen Distributions},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {1134-1144}
}