Improving Robustness to Texture Bias via Shape-Focused Augmentation

Sangjun Lee, Inwoo Hwang, Gi-Cheon Kang, Byoung-Tak Zhang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022, pp. 4323-4331

Abstract


Despite the significant progress of deep neural networks in image classification, it has been reported that CNNs trained on ImageNet rely heavily on local texture information rather than capturing the complex visual concepts of objects. To delve into this phenomenon, recent studies proposed generating images with modified texture information for training the model. However, these methods largely sacrifice classification accuracy on the in-domain dataset while achieving improved performance on out-of-distribution datasets. Motivated by the fact that humans tend to focus on shape information, we aim to resolve this issue by proposing a shape-focused augmentation in which the textures of an object's foreground and background are modified separately. The key idea is that by applying different modifications to the inside and outside of an object, not only is the bias toward texture reduced, but the model is also induced to focus on shape. Experiments show that the proposed method successfully reduces texture bias and also improves classification performance on the original dataset.
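The core idea of separately perturbing the textures inside and outside an object mask can be illustrated with a minimal sketch. Note this is an assumption-laden toy, not the paper's actual pipeline: the function name `shape_focused_augment` and the specific perturbations (additive noise on the foreground, a color shift on the background) are hypothetical stand-ins for whatever texture modifications the method uses; only the mask-based split of foreground and background follows the abstract.

```python
import numpy as np

def shape_focused_augment(image, mask, rng=None):
    """Toy sketch: apply different texture perturbations inside and
    outside the object mask, leaving the mask boundary (the shape) intact.

    image: float array (H, W, 3) with values in [0, 1]
    mask:  bool array (H, W), True on the object's foreground
    """
    rng = np.random.default_rng() if rng is None else rng
    # Hypothetical foreground texture change: additive Gaussian noise.
    fg_noise = rng.normal(0.0, 0.1, size=image.shape)
    # Hypothetical background texture change: a uniform color shift.
    bg_shift = rng.uniform(-0.2, 0.2, size=(1, 1, 3))
    m = mask[..., None]  # broadcast the 2-D mask over the color channels
    out = np.where(m, image + fg_noise, image + bg_shift)
    return np.clip(out, 0.0, 1.0)

# Usage: augment a gray image whose top half is the "object".
rng = np.random.default_rng(0)
img = np.full((4, 4, 3), 0.5)
obj = np.zeros((4, 4), dtype=bool)
obj[:2, :] = True
aug = shape_focused_augment(img, obj, rng)
```

Because the two regions receive independent perturbations, texture cues become unreliable across the mask boundary, which is what pushes the model toward shape.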

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Lee_2022_CVPR,
  author    = {Lee, Sangjun and Hwang, Inwoo and Kang, Gi-Cheon and Zhang, Byoung-Tak},
  title     = {Improving Robustness to Texture Bias via Shape-Focused Augmentation},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2022},
  pages     = {4323-4331}
}