The Multiverse Loss for Robust Transfer Learning

Etai Littwin, Lior Wolf; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 3957-3966

Abstract


Deep learning techniques are renowned for supporting effective transfer learning. However, as we demonstrate, the transferred representations support only a few modes of separation, and much of their dimensionality is left unutilized. In this work we suggest learning, in the source domain, multiple orthogonal classifiers. We prove that this leads to a reduced-rank representation, which nevertheless supports more discriminative directions. Interestingly, the softmax probabilities produced by the multiple classifiers are likely to be identical. Extensive experimental results further demonstrate the effectiveness of our method.
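The core idea of training several classifier heads on a shared representation, with the heads encouraged to be mutually orthogonal, can be illustrated with a minimal numpy sketch. This is an assumption-laden illustration, not the paper's exact loss: the function `multiverse_loss`, the squared-inner-product orthogonality penalty, and the weight `ortho_weight` are all hypothetical choices made here for clarity.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def multiverse_loss(features, labels, heads, ortho_weight=1.0):
    """Illustrative sketch (not the paper's exact formulation):
    cross-entropy summed over k classifier heads on a shared
    representation, plus a penalty pushing the heads' weight
    matrices toward mutual orthogonality."""
    n = features.shape[0]
    ce = 0.0
    for W in heads:  # each W has shape (feature_dim, num_classes)
        p = softmax(features @ W)
        ce += -np.log(p[np.arange(n), labels] + 1e-12).mean()
    # Orthogonality penalty: squared inner products between the
    # column spaces of every pair of distinct heads.
    ortho = 0.0
    for i in range(len(heads)):
        for j in range(i + 1, len(heads)):
            ortho += np.sum((heads[i].T @ heads[j]) ** 2)
    return ce + ortho_weight * ortho
```

With heads whose weight matrices occupy disjoint coordinate subspaces, the penalty term vanishes and only the summed cross-entropy remains; duplicated heads, by contrast, pay the full penalty. In practice such a loss would be minimized by gradient descent over both the shared representation and the heads.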

Related Material


@InProceedings{Littwin_2016_CVPR,
author = {Littwin, Etai and Wolf, Lior},
title = {The Multiverse Loss for Robust Transfer Learning},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2016}
}