Addressing Model Vulnerability to Distributional Shifts Over Image Transformation Sets

Riccardo Volpi, Vittorio Murino; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 7980-7989

Abstract


We are concerned with the vulnerability of computer vision models to distributional shifts. We formulate a combinatorial optimization problem that evaluates the regions of the image space where a given model is most vulnerable, in terms of image transformations applied to the input, and address it with standard search algorithms. We further embed this idea in a training procedure: over iterations, we define new data augmentation rules according to the image transformations to which the current model is most vulnerable. An empirical evaluation on classification and semantic segmentation problems suggests that the devised algorithm trains models that are more robust against content-preserving image manipulations and, in general, against distributional shifts.
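The core loop described above can be sketched with a toy example: search over compositions of content-preserving image transformations for the one that maximizes the model's loss, then promote that composition to a data augmentation rule. The transformation set, the placeholder loss, and all function names below are illustrative assumptions, not the authors' implementation, which operates on real models and losses.

```python
import random
import numpy as np

# Candidate content-preserving transformations (toy, assumed set).
TRANSFORMS = {
    "identity": lambda x: x,
    "flip_lr": lambda x: x[:, ::-1],
    "brighten": lambda x: np.clip(x + 0.2, 0.0, 1.0),
    "darken": lambda x: np.clip(x - 0.2, 0.0, 1.0),
}

def toy_loss(image):
    """Stand-in for the model's loss on a transformed image (assumption)."""
    return float(np.abs(image - 0.5).mean())

def worst_transform_sequence(image, length=2, n_samples=50, seed=0):
    """Random search (one of the 'standard search algorithms') over
    length-`length` compositions, returning the loss-maximizing one."""
    rng = random.Random(seed)
    names = list(TRANSFORMS)
    best_seq, best_loss = None, float("-inf")
    for _ in range(n_samples):
        seq = [rng.choice(names) for _ in range(length)]
        out = image
        for name in seq:
            out = TRANSFORMS[name](out)
        loss = toy_loss(out)
        if loss > best_loss:
            best_seq, best_loss = seq, loss
    return best_seq, best_loss

# Training-loop sketch: the worst composition found for the current model
# becomes a new augmentation rule for subsequent training iterations.
image = np.full((4, 4), 0.5)
seq, loss = worst_transform_sequence(image)
augmentation_pool = [seq]
```

In the paper this search is repeated as training progresses, so the augmentation pool tracks whatever shifts the current model handles worst, rather than a fixed hand-picked set.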

Related Material


[bibtex]
@InProceedings{Volpi_2019_ICCV,
author = {Volpi, Riccardo and Murino, Vittorio},
title = {Addressing Model Vulnerability to Distributional Shifts Over Image Transformation Sets},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}