Maximally Compact and Separated Features with Regular Polytope Networks

Federico Pernici, Matteo Bruni, Claudio Baecchi, Alberto Del Bimbo; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 46-53

Abstract


Convolutional Neural Networks (CNNs) trained with the Softmax loss are widely used classification models for several vision tasks. Typically, a learnable transformation (i.e. the classifier) is placed at the end of such models, returning class scores that are further normalized into probabilities by Softmax. This learnable transformation has a fundamental role in determining the network's internal feature representation. In this work we show how to extract from CNNs features with the properties of maximum inter-class separability and maximum intra-class compactness by setting the parameters of the classifier transformation as not trainable (i.e. fixed). We obtain features similar to what can be obtained with the well-known "Center Loss" [1] and other similar approaches, but with several practical advantages, including maximal exploitation of the available feature space, a reduction in the number of network parameters, and no need for auxiliary losses besides the Softmax. Our approach unifies and generalizes two apparently different classes of methods: discriminative feature learning, pioneered by the Center Loss [1], and fixed classifiers, first evaluated in [2]. Preliminary qualitative experimental results provide some insight on the potential of our combined strategy.
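To make the idea concrete, the following is a minimal NumPy sketch (not the authors' code) of the central construction: the classifier weights are fixed to the vertices of a regular simplex, a configuration in which every pair of class prototypes has the same cosine similarity of -1/(n-1), i.e. the classes are maximally and equally separated. The function name `simplex_prototypes` is illustrative; the paper also considers other regular polytopes (d-cube and d-orthoplex).

```python
import numpy as np

def simplex_prototypes(num_classes):
    """Return unit-norm vertices of a regular simplex centered at the origin.

    Any two distinct prototypes have cosine similarity -1/(num_classes - 1),
    the maximally separated configuration of num_classes points on a sphere.
    """
    n = num_classes
    # Take the standard basis vectors of R^n, subtract their centroid so the
    # vertices sum to zero, then normalize each row to unit length.
    W = np.eye(n) - np.full((n, n), 1.0 / n)
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    return W  # shape (n, n); rows span an (n - 1)-dimensional subspace

# In a fixed-classifier network these rows would serve as frozen class
# weights: logits = features @ W.T, with W excluded from gradient updates.
W = simplex_prototypes(10)
cosines = W @ W.T  # off-diagonal entries all equal -1/9
```

Because the prototypes are fixed before training, the feature extractor is forced to align each class's features with its assigned vertex, yielding the compact and separated features described above without any auxiliary loss.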

Related Material


[pdf]
[bibtex]
@InProceedings{Pernici_2019_CVPR_Workshops,
author = {Pernici, Federico and Bruni, Matteo and Baecchi, Claudio and Del Bimbo, Alberto},
title = {Maximally Compact and Separated Features with Regular Polytope Networks},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}