Rotation Equivariant Vector Field Networks

Diego Marcos, Michele Volpi, Nikos Komodakis, Devis Tuia; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 5048-5057


In many computer vision tasks, we expect a particular behavior of the output with respect to rotations of the input image. If this relationship is explicitly encoded, instead of being treated as any other variation, the complexity of the problem is decreased, leading to a reduction in the size of the required model. We propose Rotation Equivariant vector field Networks (RotEqNet) to encode rotation equivariance and invariance into Convolutional Neural Networks (CNNs). Each convolutional filter is applied at multiple orientations and returns a vector field representing the magnitude and angle of the highest scoring orientation at every spatial location. A modified convolution operator that takes vector fields as inputs and filters can then be applied to build deep architectures. We test RotEqNet on several problems requiring different responses with respect to the inputs' rotation: image classification, biomedical image segmentation, orientation estimation, and patch matching. In all cases, we show that RotEqNet offers very compact models in terms of number of parameters and provides results in line with those of networks orders of magnitude larger.
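The core operation described above, applying one filter at several orientations and keeping, per pixel, the magnitude and angle of the strongest response, can be sketched as follows. This is an illustrative NumPy/SciPy sketch, not the authors' implementation; the function name `oriented_response` and the choice of `scipy.ndimage` for rotation and correlation are assumptions made here for clarity.

```python
import numpy as np
from scipy.ndimage import correlate, rotate

def oriented_response(image, filt, n_angles=8):
    """Illustrative sketch of a rotating-filter layer: correlate `image`
    with `filt` rotated to n_angles orientations, then at each pixel keep
    the magnitude and angle (radians) of the highest-scoring orientation,
    yielding a vector field instead of a scalar feature map."""
    angles = np.linspace(0.0, 360.0, n_angles, endpoint=False)
    # Stack of responses, one per filter orientation: (n_angles, H, W).
    responses = np.stack([
        correlate(image, rotate(filt, a, reshape=False), mode='nearest')
        for a in angles
    ])
    # Orientation max-pooling: index of the strongest response per pixel.
    best = responses.argmax(axis=0)
    magnitude = np.take_along_axis(responses, best[None], axis=0)[0]
    angle = np.deg2rad(angles)[best]
    return magnitude, angle
```

In a full network, the (magnitude, angle) pair would be converted to Cartesian components and fed to the next layer's modified convolution, which operates on vector fields.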

Related Material

@InProceedings{Marcos_2017_ICCV,
  author    = {Marcos, Diego and Volpi, Michele and Komodakis, Nikos and Tuia, Devis},
  title     = {Rotation Equivariant Vector Field Networks},
  booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  month     = {Oct},
  year      = {2017}
}