Estimating Example Difficulty Using Variance of Gradients

Chirag Agarwal, Daniel D'souza, Sara Hooker; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 10368-10378

Abstract


In machine learning, a question of great interest is understanding which examples are challenging for a model to classify. Identifying atypical examples ensures the safe deployment of models, isolates samples that require further human inspection, and provides interpretability into model behavior. In this work, we propose Variance of Gradients (VoG) as a valuable and efficient metric to rank data by difficulty and to surface a tractable subset of the most challenging examples for human-in-the-loop auditing. We show that data points with high VoG scores are far more difficult for the model to learn and over-index on corrupted or memorized examples. Further, restricting the evaluation to the test set instances with the lowest VoG improves the model's generalization performance. Finally, we show that VoG is a valuable and efficient ranking for out-of-distribution detection.
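The scoring step described above can be sketched in a few lines: the per-pixel variance of the input gradients is computed across training checkpoints, then averaged over pixels to yield one scalar per example. This is a minimal NumPy sketch of that aggregation only; it assumes the per-checkpoint gradients of the class activation with respect to the input have already been collected, and the function name `compute_vog` and the snapshot layout are our own, not from the paper's released code.

```python
import numpy as np

def compute_vog(grad_snapshots):
    """Variance of Gradients (VoG) score for a single example.

    grad_snapshots: array-like of shape (K, ...) holding the gradient of the
    class activation w.r.t. the input, captured at K training checkpoints.
    Returns a scalar: the per-pixel standard deviation across checkpoints,
    averaged over all pixels.
    """
    grads = np.asarray(grad_snapshots, dtype=np.float64)
    mu = grads.mean(axis=0)                 # mean gradient per pixel across checkpoints
    var = ((grads - mu) ** 2).mean(axis=0)  # per-pixel variance across checkpoints
    return float(np.sqrt(var).mean())       # average per-pixel std over all pixels
```

An example whose gradients are identical at every checkpoint scores zero; examples whose gradients fluctuate throughout training, which the abstract associates with corrupted or memorized data, score high and can be surfaced for auditing.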

Related Material


@InProceedings{Agarwal_2022_CVPR,
  author    = {Agarwal, Chirag and D'souza, Daniel and Hooker, Sara},
  title     = {Estimating Example Difficulty Using Variance of Gradients},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2022},
  pages     = {10368-10378}
}