Dissecting the High-Frequency Bias in Convolutional Neural Networks

Antonio A. Abello, Roberto Hirata, Zhangyang Wang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 863-871

Abstract


For convolutional neural networks (CNNs), a common hypothesis that explains both their capability of generalization and their characteristic brittleness is that these models are implicitly regularized to rely on imperceptible high-frequency patterns, more than humans do. This hypothesis has seen some empirical validation, but most works do not rigorously divide the image frequency spectrum. We present a model to divide the spectrum into disjoint discs based on the distribution of energy and apply simple feature-importance procedures to test whether high frequencies are more important than lower ones. We find evidence that mid- or high-level frequencies are disproportionately important for CNNs. The evidence is robust across different datasets and networks. Moreover, we find that network attributes, such as architecture and depth, have diverse effects on frequency bias and robustness in general. Code for reproducing our experiments is available at: https://github.com/Abello966/FrequencyBiasExperiments
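The band-splitting idea described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's exact procedure: the function names (`split_into_bands`, `energy_band_edges`) are ours, and we pick annulus radii so that each band holds an approximately equal share of spectral energy, which is one plausible reading of "based on the distribution of energy".

```python
import numpy as np

def radial_grid(shape):
    """Distance of each frequency bin from the DC component (after fftshift)."""
    h, w = shape
    yy, xx = np.ogrid[:h, :w]
    return np.hypot(yy - h // 2, xx - w // 2)

def energy_band_edges(spectrum, r, n_bands):
    """Choose radii so each annulus holds roughly 1/n_bands of total energy.
    This equal-energy rule is an assumption for illustration."""
    energy = np.abs(spectrum) ** 2
    order = np.argsort(r, axis=None)
    cum = np.cumsum(energy.ravel()[order])
    cum = cum / cum[-1]
    r_sorted = r.ravel()[order]
    edges = [0.0]
    for k in range(1, n_bands):
        edges.append(r_sorted[np.searchsorted(cum, k / n_bands)])
    edges.append(r.max() + 1.0)  # make the last annulus cover the corners
    return edges

def split_into_bands(img, n_bands=4):
    """Return n_bands images, each retaining one disjoint frequency annulus.
    Because the annuli partition the plane, the band images sum to the input."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    r = radial_grid(img.shape)
    edges = energy_band_edges(spectrum, r, n_bands)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (r >= lo) & (r < hi)
        bands.append(np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask))))
    return bands
```

A simple feature-importance test in this spirit would then feed each band-filtered image (or the image with one band removed) to a trained CNN and compare accuracy drops across bands.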

Related Material


[bibtex]
@InProceedings{Abello_2021_CVPR,
  author    = {Abello, Antonio A. and Hirata, Roberto and Wang, Zhangyang},
  title     = {Dissecting the High-Frequency Bias in Convolutional Neural Networks},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2021},
  pages     = {863-871}
}