Towards an Understanding of Neural Networks in Natural-Image Spaces

Yifei Fan, Anthony Yezzi; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 75-78

Abstract


Two major uncertainties, dataset bias and adversarial examples, prevail in state-of-the-art AI algorithms built on deep neural networks. In this paper, we present an intuitive explanation of these issues as well as an interpretation of the performance of deep networks in a natural-image space. The explanation consists of two parts: the variational-calculus view of machine learning and a hypothetical model of natural-image spaces. Following the explanation, we (1) demonstrate that the values of training samples differ, (2) provide incremental boosts to the accuracy of a CIFAR-10 classifier by introducing an additional "random-noise" category during training, and (3) alleviate over-fitting, and thereby enhance the robustness of a classifier against adversarial examples, by detecting and excluding illusive training samples that are consistently misclassified. Our overall contribution is therefore twofold. First, while most existing algorithms treat data equally and have a strong appetite for more data, we demonstrate in contrast that an individual datum can sometimes have a disproportionate and counterproductive influence, and that it is not always better to train neural networks with more data. Second, we propose more thoughtful strategies that take into account the geometric and topological properties of the natural-image spaces to which deep networks are applied.
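The two data-handling ideas in the abstract — adding a "random-noise" category during training, and excluding training samples that are consistently misclassified — can be sketched as simple dataset transformations. The helpers below are hypothetical illustrations (the function names, thresholds, and uniform-noise choice are assumptions, not the paper's exact procedure), shown with CIFAR-10-shaped numpy arrays:

```python
import numpy as np

def add_noise_category(images, labels, n_noise, num_classes=10, seed=0):
    """Append uniform random-noise images under a new extra label.

    Hypothetical helper illustrating the paper's "random-noise" category:
    noise samples receive label `num_classes`, so a CIFAR-10 classifier
    would be trained with 11 output classes instead of 10.
    """
    rng = np.random.default_rng(seed)
    noise = rng.integers(0, 256, size=(n_noise,) + images.shape[1:],
                         dtype=images.dtype)
    noise_labels = np.full(n_noise, num_classes, dtype=labels.dtype)
    return (np.concatenate([images, noise], axis=0),
            np.concatenate([labels, noise_labels], axis=0))

def exclude_illusive_samples(miss_history, threshold=1.0):
    """Return indices of samples to keep, dropping "illusive" ones.

    `miss_history` is a hypothetical (n_samples, n_epochs) 0/1 array
    recording whether each sample was misclassified in each epoch;
    samples misclassified in at least `threshold` of the epochs
    (by default, all of them) are excluded from further training.
    """
    miss_rate = miss_history.mean(axis=1)   # per-sample misclassification rate
    return np.nonzero(miss_rate < threshold)[0]

# Toy stand-in with CIFAR-10 image shapes (4 images, labels 0..3).
X = np.zeros((4, 32, 32, 3), dtype=np.uint8)
y = np.array([0, 1, 2, 3], dtype=np.int64)
X_aug, y_aug = add_noise_category(X, y, n_noise=2)   # 6 images, labels end in 10, 10
```

In this sketch, a sample misclassified in every recorded epoch is treated as "consistently misclassified" and filtered out before retraining.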

Related Material


[pdf]
[bibtex]
@InProceedings{Fan_2019_CVPR_Workshops,
author = {Fan, Yifei and Yezzi, Anthony},
title = {Towards an Understanding of Neural Networks in Natural-Image Spaces},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019},
pages = {75-78}
}