InterActive: Inter-Layer Activeness Propagation

Lingxi Xie, Liang Zheng, Jingdong Wang, Alan L. Yuille, Qi Tian; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 270-279

Abstract


An increasing number of computer vision tasks can be tackled with deep features, which are the intermediate outputs of a pre-trained Convolutional Neural Network. Despite their strong overall performance, deep features extracted from low-level neurons remain unsatisfactory, arguably because they cannot access the spatial context contained in the higher layers. In this paper, we present InterActive, a novel algorithm which computes the activeness of neurons and network connections. Activeness is propagated through a neural network in a top-down manner, carrying high-level context and improving the descriptive power of low-level and mid-level neurons. Visualization indicates that neuron activeness can be interpreted as spatially weighted neuron responses. We achieve state-of-the-art classification performance on a wide range of image datasets.
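
As a rough illustration of the idea only (not the paper's exact formulation), the sketch below re-weights a mid-level convolutional feature map of a pre-trained VGG-16 by a top-down gradient signal from a higher layer and pools the result into an image descriptor. The layer indices, the gradient-based weighting, and the function names are assumptions introduced for this example; the sketch assumes PyTorch and torchvision (>= 0.13) are available.

# A minimal sketch, not the authors' exact method: weight a mid-level feature
# map of a pre-trained CNN by a top-down gradient signal from a higher layer,
# then pool it into an image descriptor. Layer indices are illustrative.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def spatially_weighted_descriptor(image_path, low_layer=16, high_layer=30):
    """Pool a mid-level feature map after weighting it by the gradient of a
    higher-layer response (a stand-in for top-down activeness propagation)."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)

    feats = {}
    def save(name):
        def hook(module, inputs, output):
            feats[name] = output
        return hook

    h_low = model.features[low_layer].register_forward_hook(save("low"))
    h_high = model.features[high_layer].register_forward_hook(save("high"))
    model.features(x)                       # forward pass builds the autograd graph
    h_low.remove(); h_high.remove()

    # Top-down signal: gradient of the summed high-layer activation
    # with respect to the low-layer feature map.
    grad, = torch.autograd.grad(feats["high"].sum(), feats["low"])

    weighted = feats["low"] * grad.clamp(min=0)            # spatial re-weighting
    return weighted.mean(dim=(2, 3)).squeeze(0).detach()   # C-dimensional descriptor

# Usage (hypothetical image path):
# desc = spatially_weighted_descriptor("image.jpg")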

Related Material


[bibtex]
@InProceedings{Xie_2016_CVPR,
author = {Xie, Lingxi and Zheng, Liang and Wang, Jingdong and Yuille, Alan L. and Tian, Qi},
title = {InterActive: Inter-Layer Activeness Propagation},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2016}
}