Learning Representational Invariance Instead of Categorization

Alex Hernandez-Garcia, Peter Konig; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019

Abstract


The most accurate current models of image object categorization are deep neural networks trained on large labeled data sets. Minimizing a classification loss between the predictions of the network and the true labels has proven an effective way to learn discriminative functions of the object classes. However, recent studies have suggested that such models learn highly discriminative features that are not aligned with visual perception and might be at the root of adversarial vulnerability. Here, we propose to replace the classification loss with the joint optimization of invariance to identity-preserving transformations of images (data augmentation invariance) and invariance to objects of the same category (class-wise invariance). We hypothesize that optimizing these invariance objectives might yield features that are more aligned with visual perception and more robust to adversarial perturbations, while remaining suitable for accurate object categorization.
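The two invariance objectives described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the squared-distance form of the losses, the class-centroid formulation, and the weighting parameter `alpha` are all assumptions made for the sake of the example.

```python
import numpy as np

def augmentation_invariance_loss(z_orig, z_aug):
    """Mean squared distance between representations of images and of
    their identity-preserving transformations (illustrative form)."""
    return np.mean(np.sum((z_orig - z_aug) ** 2, axis=1))

def class_wise_invariance_loss(z, labels):
    """Mean squared distance of each representation to its class centroid,
    an illustrative way of pulling same-class representations together."""
    loss = 0.0
    for c in np.unique(labels):
        z_c = z[labels == c]
        centroid = z_c.mean(axis=0)
        loss += np.sum((z_c - centroid) ** 2)
    return loss / len(z)

def joint_invariance_loss(z_orig, z_aug, labels, alpha=0.5):
    """Weighted sum of the two invariance terms (alpha is a hypothetical
    trade-off parameter, not taken from the paper)."""
    return (alpha * augmentation_invariance_loss(z_orig, z_aug)
            + (1 - alpha) * class_wise_invariance_loss(z_orig, labels))
```

Both terms vanish when representations are perfectly invariant: identical representations for an image and its augmented version zero out the first term, and identical representations within a class zero out the second.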

Related Material


[bibtex]
@InProceedings{Hernandez-Garcia_2019_ICCV,
author = {Hernandez-Garcia, Alex and Konig, Peter},
title = {Learning Representational Invariance Instead of Categorization},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}