Focus Longer to See Better: Recursively Refined Attention for Fine-Grained Image Classification

Prateek Shroff, Tianlong Chen, Yunchao Wei, Zhangyang Wang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 868-869

Abstract


Deep neural networks have made great strides in coarse-grained image classification, in part due to their strong ability to extract discriminative feature representations from images. However, the marginal visual differences between classes in fine-grained images make the task much harder. In this paper, we focus on these marginal differences to extract more representative features. Similar to human vision, our network repeatedly focuses on parts of the image to spot small discriminative regions that distinguish the classes. Moreover, we show through interpretability techniques how the network's focus shifts from coarse to fine details. Through our experiments, we also show that a simple attention model can aggregate these finer details, weighting them to emphasize the most dominant discriminative part of the image. Our network uses only image-level labels and needs no bounding-box or part annotations. Further, its simplicity makes it an easy plug-n-play module. Apart from providing interpretability, our network boosts performance (by up to 2%) over its baseline counterparts.
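The abstract mentions a simple attention model that forms a weighted aggregate of finer part-level details. A minimal, framework-free sketch of such softmax-weighted aggregation is below; all names, scores, and the toy features are illustrative assumptions, not the paper's actual module (which would score features with learned parameters):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate(part_features, scores):
    """Weighted sum of per-part feature vectors.

    part_features: K feature vectors (lists of floats), e.g. one per
        recursively attended image region.
    scores: K attention scores (here hand-picked; in the paper they
        would come from a learned attention module).
    """
    weights = softmax(scores)
    dim = len(part_features[0])
    fused = [0.0] * dim
    for w, feat in zip(weights, part_features):
        for i, x in enumerate(feat):
            fused[i] += w * x
    return fused

# Toy example: three "part" features, the second scored highest,
# so it dominates the fused representation.
parts = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
fused = aggregate(parts, scores=[0.1, 2.0, 0.3])
```

The softmax normalization ensures the weights sum to one, so the fused vector stays on the same scale as the individual part features regardless of how many regions are attended.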

Related Material


[bibtex]
@InProceedings{Shroff_2020_CVPR_Workshops,
author = {Shroff, Prateek and Chen, Tianlong and Wei, Yunchao and Wang, Zhangyang},
title = {Focus Longer to See Better: Recursively Refined Attention for Fine-Grained Image Classification},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}