[bibtex]
@InProceedings{Gabot_2026_WACV,
  author    = {Gabot, Quentin and Lim, Teck-Yian and Fix, Jeremy and Frontera-Pons, Joana and Ren, Chengfang and Ovarlez, Jean-Philippe},
  title     = {Shift-Equivariant Complex-Valued Convolutional Neural Networks},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {March},
  year      = {2026},
  pages     = {2575-2584}
}
Shift-Equivariant Complex-Valued Convolutional Neural Networks
Abstract
Convolutional neural networks have shown remarkable performance in recent years on various computer vision problems. However, the traditional convolutional neural network architecture lacks a critical property: shift equivariance and invariance, which are broken by downsampling and upsampling operations. Although data augmentation techniques can help the model learn these properties empirically, a consistent and systematic way to achieve this goal is to design downsampling and upsampling layers that guarantee these properties by construction. Adaptive Polyphase Sampling (APS) introduced the cornerstone for shift invariance, later extended to shift equivariance with Learnable Polyphase up/downsampling (LPS), both applied to real-valued neural networks. In this paper, we extend the work on LPS to complex-valued neural networks, both from a theoretical perspective and with a novel building block: a projection layer from $\mathbb{C}$ to $\mathbb{R}$ before the Gumbel Softmax. We finally evaluate this extension on several computer vision problems, testing the invariance property on classification tasks and the equivariance property on both reconstruction and semantic segmentation problems using polarimetric synthetic aperture images.
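To illustrate the idea described above, here is a minimal NumPy sketch of LPS-style downsampling for a complex-valued signal: the input is split into its polyphase components, each component is projected from $\mathbb{C}$ to $\mathbb{R}$ (here the l2 norm of the modulus, an assumed choice of projection and score, not the paper's exact layer), and a Gumbel-Softmax over those real scores softly selects a component.

```python
import numpy as np

def polyphase_components(x, stride=2):
    """Split a 1-D signal into its `stride` polyphase components."""
    return [x[k::stride] for k in range(stride)]

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Gumbel-Softmax over real-valued logits (soft selection)."""
    rng = rng or np.random.default_rng()
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel noise
    y = (logits + g) / tau
    e = np.exp(y - y.max())
    return e / e.sum()

def lps_downsample_complex(x, stride=2, tau=1.0, rng=None):
    """Sketch of complex-valued LPS downsampling.

    Projection C -> R is the modulus followed by an l2-norm score per
    polyphase component (an illustrative assumption); the Gumbel-Softmax
    then weights the components. At inference one would take the argmax
    component instead of the soft mixture.
    """
    comps = polyphase_components(x, stride)
    logits = np.array([np.linalg.norm(np.abs(c)) for c in comps])
    weights = gumbel_softmax(logits, tau, rng)
    return sum(w * c for w, c in zip(weights, comps))

# Usage: a complex 1-D signal of length 8, downsampled by 2.
x = np.exp(1j * np.linspace(0.0, np.pi, 8))
y = lps_downsample_complex(x, stride=2, tau=0.5,
                           rng=np.random.default_rng(0))
```

Because the selected polyphase component shifts consistently with the input, selecting by a shift-invariant score (rather than always taking phase 0, as plain strided downsampling does) is what restores shift equivariance.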