Using and Abusing Equivariance

Tom Edixhoven, Attila Lengyel, Jan C. van Gemert; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 119-128

Abstract


In this paper we show how Group Equivariant Convolutional Neural Networks use subsampling to learn to break equivariance to their symmetries. We focus on the 2D roto-translation group and investigate the impact of broken equivariance on network performance. We show that a change in the input dimensions of a network as small as a single pixel can be enough for commonly used architectures to become only approximately, rather than exactly, equivariant. We investigate the impact of networks not being exactly equivariant and find that approximately equivariant networks generalise significantly worse to unseen symmetries compared to their exactly equivariant counterparts. However, when the symmetries in the training data are not identical to the symmetries of the network, we find that approximately equivariant networks are able to relax their own equivariant constraints, causing them to match or outperform exactly equivariant networks on common benchmark datasets.
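The interaction between subsampling and input size mentioned in the abstract can be illustrated with a toy experiment. The snippet below is not the authors' code, only a minimal sketch under simplifying assumptions (plain stride-2 subsampling standing in for strided convolution or pooling, and 90-degree rotations from the roto-translation group): with an odd input size the stride-2 grid is symmetric about the image centre and subsampling commutes exactly with rotation, while with an even input size, one pixel larger or smaller, the two operations no longer commute and equivariance can only hold approximately.

```python
import numpy as np

def subsample(x):
    """Stride-2 subsampling: keep pixels at even row/column indices."""
    return x[::2, ::2]

rng = np.random.default_rng(0)
for size in (32, 33):  # even vs. odd input size, differing by a single pixel
    x = rng.standard_normal((size, size))
    # Exact equivariance to 90-degree rotations requires that
    # rotating-then-subsampling equals subsampling-then-rotating.
    rot_then_sub = subsample(np.rot90(x))
    sub_then_rot = np.rot90(subsample(x))
    print(size, np.allclose(rot_then_sub, sub_then_rot))

# Expected output under these assumptions:
# 32 False  <- even size: subsampling breaks exact rotation equivariance
# 33 True   <- odd size: subsampling commutes with 90-degree rotation
```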

Related Material


[bibtex]
@InProceedings{Edixhoven_2023_ICCV,
  author    = {Edixhoven, Tom and Lengyel, Attila and van Gemert, Jan C.},
  title     = {Using and Abusing Equivariance},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2023},
  pages     = {119-128}
}