Shape Prior is Not All You Need: Discovering Balance between Texture and Shape bias in CNN

Hyunhee Chung, Kyung Ho Park; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 4160-4175

Abstract


As Convolutional Neural Networks (CNNs) trained on ImageNet are known to be biased toward image texture rather than object shape, recent works have proposed that elevating the shape awareness of CNNs makes them more similar to human visual recognition. However, beyond the ImageNet-trained CNN, how can we make CNNs similar to human vision in the wild? In this paper, we present a series of analyses to answer this question. First, we propose AdaBA, a novel method that quantitatively illustrates a CNN's shape and texture bias while resolving several limitations of the prior method. With AdaBA, we focus on the bias landscape of fine-tuned CNNs, which previous studies have not addressed. We discover that fine-tuned CNNs are also biased toward texture, but that the strength of this bias varies with the downstream dataset; thus, we presume that the data distribution is a root cause of texture bias. To tackle this root cause, we propose a granular labeling scheme, a simple but effective solution that redesigns the label space to pursue a balance between texture and shape biases. We empirically show that the proposed scheme improves the CNN's classification and out-of-distribution (OOD) detection performance. We expect the key findings and proposed methods in this study to deepen understanding of CNNs and to yield an effective solution for mitigating texture bias.
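The abstract does not spell out how the label space is redesigned, so the sketch below is only a minimal, hypothetical illustration of the general idea of a granular labeling scheme: each coarse class is split into finer sub-labels (here via k-means over precomputed image features), the classifier is trained on the expanded label space, and predictions are mapped back to the coarse classes at evaluation time. The function name granular_labels, the use of k-means, and the sub_classes parameter are assumptions for illustration, not the authors' exact recipe.

    # Hypothetical sketch of a granular labeling step (not the paper's exact method):
    # split every coarse class into finer sub-labels by clustering its samples.
    import numpy as np
    from sklearn.cluster import KMeans

    def granular_labels(features, coarse_labels, sub_classes=3):
        """Split each coarse class into up to `sub_classes` granular labels."""
        granular = np.empty_like(coarse_labels)
        to_coarse = {}   # granular label id -> original coarse label
        next_id = 0
        for c in np.unique(coarse_labels):
            idx = np.where(coarse_labels == c)[0]
            k = min(sub_classes, len(idx))
            clusters = KMeans(n_clusters=k, n_init=10).fit_predict(features[idx])
            for local in range(k):
                to_coarse[next_id + local] = c
            granular[idx] = clusters + next_id
            next_id += k
        return granular, to_coarse

    # Usage (assumed workflow): train the CNN on `granular` labels, then map its
    # predictions back to the coarse classes with `to_coarse` when evaluating
    # classification or OOD detection performance.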

Related Material


[pdf] [supp] [code]
[bibtex]
@InProceedings{Chung_2022_ACCV,
  author    = {Chung, Hyunhee and Park, Kyung Ho},
  title     = {Shape Prior is Not All You Need: Discovering Balance between Texture and Shape bias in CNN},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  month     = {December},
  year      = {2022},
  pages     = {4160-4175}
}