Counterfactual Attention Learning for Fine-Grained Visual Categorization and Re-Identification

Yongming Rao, Guangyi Chen, Jiwen Lu, Jie Zhou; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 1025-1034

Abstract


The attention mechanism has demonstrated great potential in fine-grained visual recognition tasks. In this paper, we present a counterfactual attention learning method that learns more effective attention based on causal inference. Unlike most existing methods that learn visual attention based on conventional likelihood, we propose to learn the attention with counterfactual causality, which provides both a tool to measure attention quality and a powerful supervisory signal to guide the learning process. Specifically, we analyze the effect of the learned visual attention on network prediction through counterfactual intervention and maximize this effect to encourage the network to learn more useful attention for fine-grained image recognition. Empirically, we evaluate our method on a wide range of fine-grained visual recognition tasks where attention plays a crucial role, including fine-grained image categorization, person re-identification, and vehicle re-identification. The consistent improvement on all benchmarks demonstrates the effectiveness of our method.
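The core idea, maximizing the effect of the learned attention relative to a counterfactual (e.g., random) attention, can be sketched as below. This is a minimal NumPy illustration under stated assumptions: the toy linear classifier head, element-wise attention weighting, and all names here are illustrative, not the paper's actual architecture or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def classify(features, weights):
    # Toy linear classifier head (illustrative stand-in for the network).
    return features @ weights

def counterfactual_attention_effect(x, attention, weights, n_samples=8):
    """Effect of the learned attention on the prediction:
    Y(A) - E[Y(A_bar)], where A_bar are counterfactual (random)
    attention maps. Training would maximize a classification
    objective on this effect, so attention that does not change
    the prediction earns no credit."""
    # Factual prediction under the learned attention.
    y_factual = classify(attention * x, weights)

    # Expected prediction under random counterfactual attentions.
    y_counterfactual = np.zeros_like(y_factual)
    for _ in range(n_samples):
        a_random = rng.uniform(size=attention.shape)
        y_counterfactual += classify(a_random * x, weights)
    y_counterfactual /= n_samples

    return y_factual - y_counterfactual

# Hypothetical toy data: 4-dim features, 3 classes.
x = rng.normal(size=4)
attention = rng.uniform(size=4)
weights = rng.normal(size=(4, 3))
effect = counterfactual_attention_effect(x, attention, weights)
```

In practice the effect logits would feed a standard cross-entropy loss alongside the factual prediction's loss, so the supervisory signal rewards attention that genuinely improves the prediction over chance.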

Related Material


@InProceedings{Rao_2021_ICCV,
    author    = {Rao, Yongming and Chen, Guangyi and Lu, Jiwen and Zhou, Jie},
    title     = {Counterfactual Attention Learning for Fine-Grained Visual Categorization and Re-Identification},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {1025-1034}
}