SPIN: Simplifying Polar Invariance for Neural Networks Application to Vision-Based Irradiance Forecasting

Quentin Paletta, Anthony Hu, Guillaume Arbod, Philippe Blanc, Joan Lasenby; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2022, pp. 5182-5191

Abstract


Translational invariance induced by pooling operations is an inherent property of convolutional neural networks, which facilitates numerous computer vision tasks such as classification. Yet, for rotationally invariant tasks, convolutional architectures require either specific rotation-invariant layers or extensive data augmentation to learn from diverse rotated versions of a given spatial configuration. Unwrapping the image into its polar coordinates provides a more explicit representation to train a convolutional architecture, as the rotational invariance becomes translational; hence the visually distinct but otherwise equivalent rotated versions of a given scene can be learnt from a single image. We show on two common vision-based solar irradiance forecasting challenges (i.e. using ground-taken sky images or satellite images) that this preprocessing step significantly improves prediction results by standardising the scene representation, while decreasing training time by a factor of 4 compared to augmenting data with rotations. In addition, this transformation magnifies the area surrounding the centre of the rotation, leading to more accurate short-term irradiance predictions.
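
As an illustration of the Cartesian-to-polar preprocessing described in the abstract, the sketch below shows one way to unwrap an image about a chosen centre (e.g. the image centre or the position of the sun) so that a rotation of the scene becomes a horizontal translation. The function name, sampling resolution, and default centre are illustrative assumptions, not taken from the paper's code.

import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(image, center=None, n_radii=128, n_angles=256):
    """Unwrap an (H, W) or (H, W, C) image into polar coordinates.

    Rows of the output index the radius, columns the angle, so a rotation
    of the input scene about `center` becomes a horizontal (circular) shift.
    """
    h, w = image.shape[:2]
    if center is None:
        center = ((h - 1) / 2.0, (w - 1) / 2.0)  # default: image centre
    max_radius = min(center[0], center[1], h - 1 - center[0], w - 1 - center[1])

    radii = np.linspace(0.0, max_radius, n_radii)
    angles = np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False)
    r, a = np.meshgrid(radii, angles, indexing="ij")

    # Sample the Cartesian image along concentric circles.
    rows = center[0] + r * np.sin(a)
    cols = center[1] + r * np.cos(a)

    if image.ndim == 2:
        return map_coordinates(image, [rows, cols], order=1)
    # Interpolate each channel independently.
    return np.stack(
        [map_coordinates(image[..., c], [rows, cols], order=1)
         for c in range(image.shape[-1])],
        axis=-1,
    )

In this representation, a rotation of the scene about the chosen centre corresponds to a circular shift along the angle axis, which the translational invariance of a convolutional network (optionally combined with circular padding along that axis) can exploit directly; OpenCV's cv2.warpPolar provides an equivalent transform. Note also that the uniform radial sampling allocates more output pixels per unit area near the centre, which is the magnification effect mentioned in the abstract.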

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Paletta_2022_CVPR,
    author    = {Paletta, Quentin and Hu, Anthony and Arbod, Guillaume and Blanc, Philippe and Lasenby, Joan},
    title     = {SPIN: Simplifying Polar Invariance for Neural Networks Application to Vision-Based Irradiance Forecasting},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2022},
    pages     = {5182-5191}
}