@InProceedings{Aminbeidokhti_2024_WACV,
  author    = {Aminbeidokhti, Masih and Pe\~na, Fidel A. Guerrero and Medeiros, Heitor Rapela and Dubail, Thomas and Granger, Eric and Pedersoli, Marco},
  title     = {Domain Generalization by Rejecting Extreme Augmentations},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2024},
  pages     = {2215-2225}
}
Domain Generalization by Rejecting Extreme Augmentations
Abstract
Data augmentation is one of the most powerful techniques for regularizing deep learning models and improving their recognition performance across a variety of tasks and domains. However, this holds for standard in-domain settings, in which the training and test data follow the same distribution. For out-of-domain settings, in which the test data follow a different and unknown distribution, the best recipe for data augmentation is unclear. In this paper, we show that data augmentation can also bring a conspicuous and robust performance improvement in out-of-domain or domain generalization settings. To do so, we propose a simple procedure: i) sample uniformly over standard data augmentation transformations; ii) increase the transformation strength to account for the higher data variance expected when working out of domain; and iii) devise a new reward function to reject extreme transformations that can harm training. With this simple recipe, our data augmentation scheme achieves results comparable to or better than state-of-the-art performance on most domain generalization datasets.
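The three-step procedure above can be sketched in code. This is a minimal illustration only: the transform set, the magnitude scale, and the distance-based reward below are hypothetical stand-ins (the paper's actual reward is learned with respect to the model), chosen so the rejection loop is self-contained and runnable.

```python
import random
import numpy as np

# Toy transform set; the paper uses standard augmentation operations.
def identity(x, m):
    return x

def brightness(x, m):
    return np.clip(x + m, 0.0, 1.0)

def contrast(x, m):
    mean = x.mean()
    return np.clip((x - mean) * (1.0 + m) + mean, 0.0, 1.0)

TRANSFORMS = [identity, brightness, contrast]

def reward(original, augmented):
    # Hypothetical reward: penalize augmentations that move too far
    # from the original sample (stand-in for the paper's reward).
    return 1.0 - np.abs(original - augmented).mean()

def sample_augmentation(x, magnitude=0.8, threshold=0.5, max_tries=10, rng=random):
    """i) sample uniformly, ii) allow strong magnitudes, iii) reject extremes."""
    for _ in range(max_tries):
        t = rng.choice(TRANSFORMS)       # i) uniform over the transform set
        m = rng.uniform(0.0, magnitude)  # ii) strengths up to a high magnitude
        x_aug = t(x, m)
        if reward(x, x_aug) >= threshold:  # iii) reject extreme transformations
            return x_aug
    return x  # fall back to the untransformed sample
```

The rejection loop keeps the benefits of strong augmentations while filtering out the samples whose reward indicates they would harm training.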