Multi-Label Image Recognition by Recurrently Discovering Attentional Regions

Zhouxia Wang, Tianshui Chen, Guanbin Li, Ruijia Xu, Liang Lin; The IEEE International Conference on Computer Vision (ICCV), 2017, pp. 464-472


This paper proposes a novel deep architecture to address multi-label image recognition, a fundamental and practical task towards general visual understanding. Current solutions for this task usually rely on an extra step of extracting hypothesis regions (i.e., region proposals), resulting in redundant computation and sub-optimal performance. In this work, we achieve interpretable and contextualized multi-label image classification by developing a recurrent memorized-attention module. This module consists of two alternately performed components: i) a spatial transformer layer that locates attentional regions from the convolutional feature maps in a region-proposal-free way, and ii) an LSTM (Long Short-Term Memory) sub-network that sequentially predicts semantic labeling scores on the located regions while capturing the global dependencies among them. The LSTM also outputs the parameters for computing the spatial transformer. On large-scale multi-label image classification benchmarks (e.g., MS-COCO and PASCAL VOC 07), our approach demonstrates superior performance over existing state-of-the-art methods in both accuracy and efficiency.
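The alternation described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the feature-map size, hidden dimension, simplified `tanh` recurrent update (standing in for a full LSTM cell), and random weights are all illustrative assumptions. It shows the core loop: the recurrent state emits an affine matrix for the spatial transformer, the attended region of the feature map is bilinearly sampled and pooled, the pooled feature updates the state, and per-region label scores are max-pooled into the final prediction.

```python
import numpy as np

def affine_grid_sample(fmap, theta, out_h=7, out_w=7):
    """Bilinearly sample an out_h x out_w patch from fmap (C, H, W)
    using a 2x3 affine matrix theta in normalized [-1, 1] coordinates
    (the spatial transformer sampling step)."""
    C, H, W = fmap.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, out_h),
                         np.linspace(-1, 1, out_w), indexing="ij")
    grid = np.stack([xs, ys, np.ones_like(xs)], axis=-1)  # (h, w, 3)
    src = grid @ theta.T                                  # (h, w, 2): source (x, y)
    # Map normalized source coordinates to pixel positions.
    px = (src[..., 0] + 1) * (W - 1) / 2
    py = (src[..., 1] + 1) * (H - 1) / 2
    x0 = np.clip(np.floor(px).astype(int), 0, W - 1)
    y0 = np.clip(np.floor(py).astype(int), 0, H - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    y1 = np.clip(y0 + 1, 0, H - 1)
    wx = np.clip(px - x0, 0.0, 1.0)
    wy = np.clip(py - y0, 0.0, 1.0)
    # Bilinear interpolation of the four neighboring feature vectors.
    return (fmap[:, y0, x0] * (1 - wx) * (1 - wy)
            + fmap[:, y0, x1] * wx * (1 - wy)
            + fmap[:, y1, x0] * (1 - wx) * wy
            + fmap[:, y1, x1] * wx * wy)   # (C, out_h, out_w)

def recurrent_attention(fmap, n_steps=3, n_labels=5, hidden=16, seed=0):
    """Sketch of the recurrent memorized-attention loop: the recurrent
    state predicts theta, a region is sampled and scored, and the pooled
    region feature updates the state for the next step."""
    rng = np.random.default_rng(seed)          # illustrative random weights
    C = fmap.shape[0]
    Wh = rng.standard_normal((hidden, hidden)) * 0.1
    Wx = rng.standard_normal((hidden, C)) * 0.1
    Wt = rng.standard_normal((6, hidden)) * 0.01   # state -> affine params
    Ws = rng.standard_normal((n_labels, hidden)) * 0.1
    h = np.zeros(hidden)
    scores = []
    for _ in range(n_steps):
        theta = (Wt @ h).reshape(2, 3)
        theta[:, :2] += 0.5 * np.eye(2)        # bias toward a centered zoom
        region = affine_grid_sample(fmap, theta)
        pooled = region.mean(axis=(1, 2))      # global-average-pool the region
        h = np.tanh(Wh @ h + Wx @ pooled)      # simplified recurrent update
        scores.append(Ws @ h)                  # per-region label scores
    # Aggregate across attended regions by max-pooling the scores.
    return np.max(np.stack(scores), axis=0)
```

With an identity `theta` and a matching output size, the sampler reproduces the input feature map, which is a quick sanity check that the bilinear sampling is correct; the loop then yields one score vector over all labels regardless of how many regions were attended.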

Related Material

@InProceedings{Wang_2017_ICCV,
  author    = {Wang, Zhouxia and Chen, Tianshui and Li, Guanbin and Xu, Ruijia and Lin, Liang},
  title     = {Multi-Label Image Recognition by Recurrently Discovering Attentional Regions},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month     = {Oct},
  year      = {2017}
}