Towards Learning Affine-Invariant Representations via Data-Efficient CNNs

Wenju Xu, Guanghui Wang, Alan Sullivan, Ziming Zhang; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 904-913

Abstract


In this paper we propose integrating a priori knowledge into both the design and training of convolutional neural networks (CNNs) to learn object representations that are invariant to affine transformations (i.e., translation, scaling, and rotation). Accordingly, we propose a novel multi-scale maxout CNN and train it end-to-end with a novel rotation-invariant regularizer. This regularizer encourages the weights in each 2D spatial filter to approximate circular patterns. In this way, we manage to handle affine transformations in training using convolution, multi-scale maxout, and circular filters. Empirically we demonstrate that such knowledge can significantly improve the data efficiency, generalization, and robustness of learned models. For instance, on the Traffic Sign data set, trained with only 10 images per class, our method achieves 84.15% test accuracy, outperforming the state-of-the-art by 29.80%.
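The rotation-invariant regularizer described above can be sketched as a penalty on how far each 2D filter deviates from a radially symmetric (circular) pattern: weights at the same distance from the filter center should be equal. The following NumPy snippet is a minimal, hypothetical illustration of that idea; the function name and the exact grouping-by-radius formulation are assumptions, not the paper's precise loss.

```python
import numpy as np

def rotation_invariant_penalty(filt):
    """Penalty encouraging a square 2D filter to form a circular pattern.

    Hypothetical sketch: weights are grouped into "rings" by their distance
    from the filter center, and the penalty is the within-ring sum of
    squared deviations from each ring's mean. A perfectly circular
    (radially symmetric) filter incurs zero penalty.
    """
    k = filt.shape[0]
    c = (k - 1) / 2.0                      # filter center coordinate
    ys, xs = np.mgrid[0:k, 0:k]
    radii = np.round(np.hypot(ys - c, xs - c), 4)  # distance of each weight
    penalty = 0.0
    for r in np.unique(radii):             # one ring per distinct radius
        ring = filt[radii == r]
        penalty += np.sum((ring - ring.mean()) ** 2)
    return penalty

# A radially symmetric 3x3 filter: every ring is constant, so penalty is 0.
circular = np.array([[0., 1., 0.],
                     [1., 2., 1.],
                     [0., 1., 0.]])
print(rotation_invariant_penalty(circular))
```

In training, such a penalty would typically be summed over all spatial filters and added to the task loss with a weighting coefficient; combined with convolution (translation) and multi-scale maxout (scale), this is what lets the network cover the three affine factors the paper targets.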

Related Material


[pdf]
[bibtex]
@InProceedings{Xu_2020_WACV,
author = {Xu, Wenju and Wang, Guanghui and Sullivan, Alan and Zhang, Ziming},
title = {Towards Learning Affine-Invariant Representations via Data-Efficient CNNs},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {March},
year = {2020}
}