- [pdf] [arXiv]
Randomized Adversarial Style Perturbations for Domain Generalization
We propose a novel domain generalization technique, referred to as Randomized Adversarial Style Perturbation (RASP), which is motivated by the observation that the characteristics of each domain are captured by the feature statistics corresponding to its style. The proposed algorithm perturbs the style of a feature in an adversarial direction towards a randomly selected class, which prevents the model from being misled by the unexpected styles observed in unseen target domains. While RASP is effective in handling domain shifts, naively integrating it into the training procedure tends to degrade the model's ability to learn from the source domains, owing to the feature distortions caused by style perturbation. We alleviate this issue with Normalized Feature Mixup (NFM), which facilitates learning the original features during training while achieving robustness to perturbed representations. Extensive experiments on various benchmarks show that our approach improves domain generalization performance, especially on large-scale benchmarks.
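The core idea can be sketched numerically. In the sketch below (an illustration, not the paper's implementation), "style" is the per-channel mean and standard deviation of a feature map; a hypothetical linear classifier is used so that the adversarial gradient of the cross-entropy toward a randomly selected class can be written in closed form, and for simplicity only the channel means are perturbed. All names (`Wc`, `eps`, the feature shapes) are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

C, H, W, K = 8, 4, 4, 5          # channels, spatial dims, number of classes
x = rng.normal(size=(C, H, W))   # a feature map from one source-domain image
Wc = rng.normal(size=(K, C))     # hypothetical linear classifier on pooled features
b = np.zeros(K)

# Style = per-channel feature statistics (mean, std); z is the style-normalized content.
mu = x.mean(axis=(1, 2))
sigma = x.std(axis=(1, 2)) + 1e-6
z = (x - mu[:, None, None]) / sigma[:, None, None]

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def ce(m, t):
    # Cross-entropy of the pooled, restyled feature toward class t.
    # With global average pooling, the pooled feature equals the channel means m.
    return -np.log(softmax(Wc @ m + b)[t])

# Adversarial style perturbation toward a randomly selected class:
# step the style (here, the channel means) along the negative gradient
# of the cross-entropy for that class, d CE / d mu = Wc^T (softmax - onehot).
target = rng.integers(K)
grad_mu = Wc.T @ (softmax(Wc @ mu + b) - np.eye(K)[target])

eps = 0.1
mu_adv = mu - eps * grad_mu / (np.linalg.norm(grad_mu) + 1e-12)
x_adv = sigma[:, None, None] * z + mu_adv[:, None, None]  # restyled feature map

# The perturbed style now pulls the prediction toward the random class.
print(ce(mu_adv, target) < ce(mu, target))
```

During training, the perturbed feature `x_adv` would replace (or accompany) the original one, and NFM would mix normalized original and perturbed features so the network still learns from clean source-domain representations.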