FFM: Injecting Out-of-Domain Knowledge via Factorized Frequency Modification

Zijian Wang, Yadan Luo, Zi Huang, Mahsa Baktashmotlagh; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023, pp. 4135-4144

Abstract

This work addresses the Single Domain Generalization (SDG) problem: generalizing a model trained on a single source (i.e., training) domain to multiple target (i.e., test) domains with different distributions. Most existing SDG approaches generate out-of-domain samples either by transforming source images into different styles or by optimizing adversarial noise perturbations. In this paper, we show that generating images with diverse styles is complementary to creating hard samples when tackling the SDG task. This motivates our approach, Factorized Frequency Modification (FFM), which produces samples that are both diverse and hard in order to tackle out-of-domain generalization. Specifically, we design a unified framework consisting of a style transformation module, an adversarial perturbation module, and a dynamic frequency selection module. We seamlessly equip the framework with iterative adversarial training, which encourages the task model to learn discriminative features from hard and diverse augmented samples. Extensive experiments on four image recognition benchmarks (Digits-DG, CIFAR-10-C, CIFAR-100-C, and PACS) demonstrate that our method outperforms existing state-of-the-art approaches.
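The abstract describes frequency modification only at a high level, and the paper's actual algorithm is not reproduced on this page. As a rough illustration of the general idea, the sketch below factorizes an image into amplitude and phase with a 2D FFT and perturbs the amplitude inside a fixed low-frequency band, a crude stand-in for the paper's dynamic frequency selection module. The function name and all parameters (strength, band) are hypothetical and not taken from the paper.

import numpy as np

def frequency_modify(image, strength=0.5, band=0.1, rng=None):
    """Illustrative frequency-space augmentation (not the paper's FFM).

    Factorizes a grayscale image (2D float array) into amplitude and
    phase via the 2D FFT, applies multiplicative noise to the amplitude
    inside a centered low-frequency band, and reconstructs the image.
    Phase is left untouched, so image structure is largely preserved
    while style-like amplitude statistics change.
    """
    if rng is None:
        rng = np.random.default_rng()
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)

    # Boolean mask over a centered low-frequency square; `band` controls
    # its half-width as a fraction of the image size (an assumption here,
    # standing in for the paper's dynamic frequency selection).
    h, w = image.shape
    cy, cx = h // 2, w // 2
    ry, rx = max(1, int(band * h)), max(1, int(band * w))
    mask = np.zeros_like(amplitude, dtype=bool)
    mask[cy - ry:cy + ry, cx - rx:cx + rx] = True

    # Multiplicative Gaussian noise on the selected amplitude band.
    noise = 1.0 + strength * rng.standard_normal(amplitude.shape)
    amplitude = np.where(mask, amplitude * noise, amplitude)

    # Recombine the modified amplitude with the original phase and invert.
    modified = amplitude * np.exp(1j * phase)
    return np.real(np.fft.ifft2(np.fft.ifftshift(modified)))

In the paper, augmented samples are additionally made hard via iterative adversarial training rather than random sampling; the noise here is drawn randomly purely for illustration.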

Related Material

[bibtex]
@InProceedings{Wang_2023_WACV,
    author    = {Wang, Zijian and Luo, Yadan and Huang, Zi and Baktashmotlagh, Mahsa},
    title     = {FFM: Injecting Out-of-Domain Knowledge via Factorized Frequency Modification},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {4135-4144}
}