Evading the Simplicity Bias: Training a Diverse Set of Models Discovers Solutions With Superior OOD Generalization

Damien Teney, Ehsan Abbasnejad, Simon Lucey, Anton van den Hengel; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 16761-16772

Abstract


Neural networks trained with SGD were recently shown to rely preferentially on linearly-predictive features and to ignore complex, equally-predictive ones. This simplicity bias can explain their lack of robustness out of distribution (OOD). The more complex the task to learn, the more likely it is that statistical artifacts (e.g. selection biases, spurious correlations) are simpler than the mechanisms to learn. We demonstrate that the simplicity bias can be mitigated and OOD generalization improved. We train a set of similar models to fit the data in different ways using a penalty on the alignment of their input gradients. We show theoretically and empirically that this induces the learning of more complex predictive patterns. OOD generalization fundamentally requires information beyond i.i.d. examples, such as multiple training environments, counterfactual examples, or other side information. Our approach shows that this requirement can be deferred to an independent model selection stage. We obtain state-of-the-art results in visual recognition on biased data and in generalization across visual domains. The method, the first to evade the simplicity bias, highlights the need for a better understanding and control of inductive biases in deep learning.
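The training idea summarized in the abstract (fitting several similar models while penalizing the alignment of their input gradients) can be illustrated with a short sketch. The following is a minimal PyTorch sketch, not the authors' released code: the pairwise squared-cosine form of the penalty, the function and variable names, and the lambda_div weight are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def diversity_step(models, optimizer, x, y, lambda_div=0.1):
    # Enable gradients w.r.t. the input so each model's input gradient can be computed.
    x = x.clone().requires_grad_(True)
    task_loss, input_grads = 0.0, []
    for m in models:
        loss = F.cross_entropy(m(x), y)   # standard task loss for this model
        task_loss = task_loss + loss
        # Input gradient, kept in the autograd graph so the penalty is differentiable.
        g = torch.autograd.grad(loss, x, create_graph=True)[0]
        input_grads.append(g.flatten(start_dim=1))
    # Diversity penalty: pairwise alignment of input gradients (squared cosine similarity),
    # pushing the models to rely on different predictive features.
    div_penalty = torch.zeros((), device=x.device)
    for i in range(len(input_grads)):
        for j in range(i + 1, len(input_grads)):
            div_penalty = div_penalty + F.cosine_similarity(
                input_grads[i], input_grads[j], dim=1).pow(2).mean()
    total = task_loss + lambda_div * div_penalty
    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return float(task_loss), float(div_penalty)
```

In this sketch the optimizer is assumed to hold the parameters of all models in the set; as the abstract notes, a separate model selection stage (using whatever side information is available, e.g. an OOD validation set) would then pick which member of the set to deploy.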

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Teney_2022_CVPR,
    author    = {Teney, Damien and Abbasnejad, Ehsan and Lucey, Simon and van den Hengel, Anton},
    title     = {Evading the Simplicity Bias: Training a Diverse Set of Models Discovers Solutions With Superior OOD Generalization},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {16761-16772}
}