What Affects Learned Equivariance in Deep Image Recognition Models?

Robert-Jan Bruintjes, Tomasz Motyka, Jan van Gemert; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2023, pp. 4839-4847

Abstract


Equivariance w.r.t. geometric transformations in neural networks improves data efficiency, parameter efficiency and robustness to out-of-domain perspective shifts. When equivariance is not designed into a neural network, the network can still learn equivariant functions from the data. We quantify this learned equivariance by proposing an improved measure for equivariance. We find evidence of a correlation between learned translation equivariance and validation accuracy on ImageNet. We therefore investigate what can increase learned equivariance in neural networks, and find that data augmentation, reduced model capacity and inductive bias in the form of convolutions all induce higher learned equivariance in neural networks.
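To make the notion of "measuring learned equivariance" concrete, the sketch below scores how translation-equivariant a feature extractor `f` is by comparing `f(shift(x))` against `shift(f(x))` with a cosine similarity, evaluated away from image borders. This is a minimal, generic illustration in NumPy, not the authors' proposed measure; the hand-rolled `conv2d` stands in for a network layer, and the crop margin in `translation_equivariance` is an assumption chosen to exclude wrap-around artifacts of `np.roll` for small kernels.

```python
import numpy as np

def conv2d(x, k):
    """'Valid' 2D cross-correlation; translation-equivariant by construction,
    so it serves as a sanity check for the measure below."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def translation_equivariance(f, x, dy, dx):
    """Cosine similarity between f(shift(x)) and shift(f(x)).

    A score of 1.0 means the map f is perfectly translation-equivariant
    on the interior region; lower scores indicate broken equivariance.
    """
    a = f(np.roll(x, (dy, dx), axis=(0, 1)))   # features of the shifted input
    b = np.roll(f(x), (dy, dx), axis=(0, 1))   # shifted features of the input
    # Generous crop margin: assumption that it covers np.roll wrap-around
    # plus the border effect of a small (<= 4x4) kernel.
    m = max(abs(dy), abs(dx)) + 3
    a = a[m:-m, m:-m].ravel()
    b = b[m:-m, m:-m].ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.standard_normal((32, 32))
    k = rng.standard_normal((3, 3))
    score = translation_equivariance(lambda img: conv2d(img, k), x, 2, 3)
    print(score)
```

For a trained network one would replace `conv2d` with the feature map of an intermediate layer; components such as padding, pooling or attention typically push the score below 1.0, which is the kind of deviation an equivariance measure is meant to expose.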

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Bruintjes_2023_CVPR,
    author    = {Bruintjes, Robert-Jan and Motyka, Tomasz and van Gemert, Jan},
    title     = {What Affects Learned Equivariance in Deep Image Recognition Models?},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {4839-4847}
}