Style Augmentation: Data Augmentation via Style Randomization

Philip T. Jackson, Amir Atapour-Abarghouei, Stephen Bonner, Toby P. Breckon, Boguslaw Obara; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 83-92

Abstract


We introduce style augmentation, a new form of data augmentation based on random style transfer, for improving the robustness of Convolutional Neural Networks (CNNs) on both classification and regression tasks. During training, style augmentation randomizes texture, contrast and color, while preserving shape and semantic content. This is accomplished by adapting an arbitrary style transfer network to perform style randomization: target style embeddings are sampled from a multivariate normal distribution instead of being computed from a style image. In addition to standard classification experiments, we investigate the effect of style augmentation (and of data augmentation generally) on domain transfer tasks. We find that data augmentation significantly improves robustness to domain shift and can be used as a simple, domain-agnostic alternative to domain adaptation. Comparing style augmentation against a mix of seven traditional augmentation techniques, we find that it can be readily combined with them to improve network performance. We validate the efficacy of our technique with domain transfer experiments in classification and monocular depth estimation, illustrating superior performance on benchmark tasks.
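To make the sampling step concrete, the sketch below (PyTorch) shows one way the core idea could look in code: instead of computing a style embedding from a style image, a random embedding is drawn from a multivariate normal and interpolated toward the content image's own style to limit augmentation strength, then used to condition a style transfer network. The ToyTransferNet, the zero mean / identity covariance, and the interpolation weight alpha here are illustrative assumptions, not the authors' released network or fitted parameters; the paper uses an existing arbitrary style transfer network and moments fit to embeddings of real style images.

import torch
import torch.nn as nn

EMBED_DIM = 100  # assumed style-embedding size, for illustration only

class ConditionalInstanceNorm(nn.Module):
    # Instance norm whose affine parameters are predicted from a style
    # embedding, a common conditioning mechanism in style transfer networks.
    def __init__(self, channels, embed_dim=EMBED_DIM):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.to_scale = nn.Linear(embed_dim, channels)
        self.to_shift = nn.Linear(embed_dim, channels)

    def forward(self, x, z):
        scale = self.to_scale(z).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(z).unsqueeze(-1).unsqueeze(-1)
        return self.norm(x) * (1 + scale) + shift

class ToyTransferNet(nn.Module):
    # Minimal stand-in for an arbitrary style transfer network: two conv
    # layers with style-conditioned instance normalization in between.
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 3, padding=1)
        self.cin1 = ConditionalInstanceNorm(32)
        self.conv2 = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, x, z):
        h = torch.relu(self.cin1(self.conv1(x), z))
        return torch.sigmoid(self.conv2(h))

def style_augment(images, transfer_net, mean, cov, alpha=0.5):
    # Style augmentation: replace the style-image embedding with a random
    # draw from N(mean, cov). In the paper these moments are fit to the
    # embeddings of real style images; alpha blends the random style with
    # the image's own style to control augmentation strength.
    dist = torch.distributions.MultivariateNormal(mean, covariance_matrix=cov)
    z_random = dist.sample((images.size(0),))       # one random style per image
    z_content = torch.zeros_like(z_random)          # stand-in for the image's own style embedding
    z = alpha * z_random + (1 - alpha) * z_content  # interpolate to limit style strength
    return transfer_net(images, z)

if __name__ == "__main__":
    torch.manual_seed(0)
    net = ToyTransferNet()
    mean = torch.zeros(EMBED_DIM)   # illustrative moments; the paper fits
    cov = torch.eye(EMBED_DIM)      # them to embeddings of real styles
    batch = torch.rand(4, 3, 64, 64)
    augmented = style_augment(batch, net, mean, cov)
    print(augmented.shape)          # torch.Size([4, 3, 64, 64])

In practice this randomized stylization would be applied on the fly to each training batch, alongside (not instead of) traditional augmentations such as flips and crops, since the paper finds the two combine well.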

Related Material


[bibtex]
@InProceedings{Jackson_2019_CVPR_Workshops,
author = {Jackson, Philip T. and Atapour-Abarghouei, Amir and Bonner, Stephen and Breckon, Toby P. and Obara, Boguslaw},
title = {Style Augmentation: Data Augmentation via Style Randomization},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}