Dynamic Computational Time for Visual Attention

Zhichao Li, Yi Yang, Xiao Liu, Feng Zhou, Shilei Wen, Wei Xu; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 1199-1209

Abstract


We propose a dynamic computational time model that accelerates the recurrent attention model (RAM) by reducing its average processing time. Rather than attending to each input image for a fixed number of steps, the model learns to decide on the fly when to stop. To achieve this, we add a continue/stop action at each time step of RAM and use reinforcement learning to learn both the optimal attention policy and the stopping policy. The modification is simple yet can dramatically reduce the average computational time while keeping the same recognition performance as RAM. Experimental results on the CUB-200-2011 and Stanford Cars datasets demonstrate that the dynamic computational time model works effectively for fine-grained image recognition.
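The mechanism described in the abstract can be sketched compactly: a recurrent attention core that emits a stochastic location action and an added continue/stop action at every step. The PyTorch sketch below is illustrative only and not the authors' implementation; the module sizes, the crop_fn glimpse extractor, the fixed location noise, and the two-way stop head are assumptions made for the example.

# Illustrative sketch, assuming a RAM-style model extended with a per-step
# continue/stop action as described in the abstract. Not the authors' code.
import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal

class DynamicTimeRAM(nn.Module):
    def __init__(self, glimpse_dim=128, hidden_dim=256, num_classes=200, max_steps=8):
        super().__init__()
        self.max_steps = max_steps
        # Glimpse network: encodes a flattened 32x32 crop plus its location.
        self.glimpse_net = nn.Sequential(nn.Linear(3 * 32 * 32 + 2, glimpse_dim), nn.ReLU())
        self.core = nn.GRUCell(glimpse_dim, hidden_dim)   # recurrent core
        self.locator = nn.Linear(hidden_dim, 2)           # where to look next
        self.stopper = nn.Linear(hidden_dim, 2)           # added continue/stop action
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, image, crop_fn):
        """image: (B, 3, H, W); crop_fn(image, loc) returns flattened (B, 3*32*32) crops."""
        B = image.size(0)
        h = image.new_zeros(B, self.core.hidden_size)
        loc = image.new_zeros(B, 2)                       # start looking at the center
        loc_logp, stop_logp = [], []
        done = torch.zeros(B, dtype=torch.bool, device=image.device)
        for _ in range(self.max_steps):
            g = self.glimpse_net(torch.cat([crop_fn(image, loc), loc], dim=1))
            h = self.core(g, h)
            # Stochastic location policy (Gaussian with a fixed std, as in RAM).
            loc_dist = Normal(torch.tanh(self.locator(h)), 0.15)
            loc = loc_dist.sample().clamp(-1, 1)
            loc_logp.append(loc_dist.log_prob(loc).sum(dim=1))
            # Added continue(0)/stop(1) policy, sampled at every time step.
            stop_dist = Categorical(logits=self.stopper(h))
            stop = stop_dist.sample()
            stop_logp.append(stop_dist.log_prob(stop))
            done = done | stop.bool()
            if done.all():                                # dynamic computational time
                break                                     # (a full version would also
                                                          # freeze h per stopped sample)
        return self.classifier(h), torch.stack(loc_logp), torch.stack(stop_logp)

One plausible training objective, in the spirit of the abstract, is a cross-entropy loss on the final prediction plus a REINFORCE term that weights the summed log-probabilities of the location and stop actions by a reward for correct classification, with some penalty on the number of steps so that earlier stopping is preferred; the paper's actual reward design may differ.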

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Li_2017_ICCV,
author = {Li, Zhichao and Yang, Yi and Liu, Xiao and Zhou, Feng and Wen, Shilei and Xu, Wei},
title = {Dynamic Computational Time for Visual Attention},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}