Learning to Find Correlated Features by Maximizing Information Flow in Convolutional Neural Networks

Wei Shen, Fei Li, Rujie Liu; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019

Abstract


Training convolutional neural networks for image classification usually causes information loss. Although this information is often redundant with respect to the target task, there are cases where discriminative information is discarded as well. For example, if images in the same category share multiple correlated features, the model may learn only a subset of those features and ignore the rest. This is not a problem unless classification on the test set relies heavily on the ignored features. We argue that the discarding of correlated discriminative information is partially caused by the fact that minimizing the classification loss does not guarantee learning all discriminative information, only the most discriminative information given the training set. To address this problem, we propose an information flow maximization (IFM) loss as a regularization term that encourages the network to find the correlated discriminative features. With less information loss, the classifier can make predictions based on more informative features. We validate our method on the shiftedMNIST dataset and show the effectiveness of the IFM loss in learning representative and discriminative features.
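The abstract describes the IFM loss only as a regularization term added to the classification objective; it does not specify the term's form. As a rough illustration of that training pattern only, the sketch below combines a cross-entropy loss with a hypothetical information proxy (the entropy of normalized mean activations of a hidden layer), which stands in for the paper's actual IFM term. The function names, the `lam` weight, and the entropy proxy are all assumptions, not the authors' formulation.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true classes.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def activation_entropy(hidden):
    # Crude information proxy (an assumption, not the paper's IFM term):
    # normalize mean absolute activations into a distribution over units
    # and measure its entropy. Higher entropy = more units stay active.
    p = np.abs(hidden).mean(axis=0)
    p = p / (p.sum() + 1e-12)
    return -np.sum(p * np.log(p + 1e-12))

def total_loss(logits, labels, hidden, lam=0.1):
    # Classification loss minus a weighted information term, mirroring
    # the "classification loss + regularizer" structure described above.
    ce = cross_entropy(softmax(logits), labels)
    return ce - lam * activation_entropy(hidden)
```

With `lam=0.0` the objective reduces to plain cross-entropy; increasing `lam` rewards hidden layers whose units contribute more evenly, which is the qualitative effect the IFM regularizer is described as having.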

Related Material


@InProceedings{Shen_2019_ICCV,
author = {Shen, Wei and Li, Fei and Liu, Rujie},
title = {Learning to Find Correlated Features by Maximizing Information Flow in Convolutional Neural Networks},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}