SEIC: Semantic Embedding with Intermediate Classes for Zero-Shot Domain Generalization
In this work, we address the Zero-Shot Domain Generalization (ZSDG) task, where the goal is to learn a model from multiple source domains such that it generalizes well to both unseen classes and unseen domains at test time. Since ZSDG combines the tasks of Domain Generalization (DG) and Zero-Shot Learning (ZSL), we explore whether advances in these fields also translate to improved performance on the ZSDG task. Specifically, we build upon a state-of-the-art approach for domain generalization and modify it so that it can also generalize to unseen classes at test time. Towards this goal, we propose to make the feature embedding space semantically meaningful, not only by pulling an image feature close to the semantic attributes of its class, but also by accounting for its similarity to neighbouring classes. In addition, to reserve space in the embedding for unseen classes, we introduce pseudo intermediate classes between semantically similar classes during training. This reduces confusion between similar classes and thus increases the discriminability of the embedding space. Extensive experiments on two large-scale benchmark datasets, namely DomainNet and DomainNet-LS, together with comparisons against state-of-the-art approaches, show that the proposed framework outperforms all other techniques on both datasets.
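The two ingredients of the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's exact formulation: the function names, the cosine-similarity threshold for deciding which classes are "semantically similar", and the choice of the midpoint of two attribute vectors as the pseudo intermediate class are all illustrative assumptions. The loss is a standard temperature-scaled cross-entropy over cosine similarities to all (real plus pseudo) class attributes, which captures the idea of pulling a feature toward its own class attributes while accounting for neighbouring classes.

```python
import numpy as np


def make_intermediate_classes(attributes, similarity_threshold=0.9):
    """Augment the class-attribute matrix with pseudo intermediate classes.

    attributes: (num_classes, dim) array of semantic attribute vectors.
    For every pair of classes whose cosine similarity exceeds the
    threshold, a pseudo class is added at the midpoint of the pair
    (an illustrative choice, not necessarily the paper's).
    """
    norm = attributes / np.linalg.norm(attributes, axis=1, keepdims=True)
    sim = norm @ norm.T  # pairwise cosine similarities between classes
    pseudo = []
    n = len(attributes)
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] >= similarity_threshold:
                pseudo.append((attributes[i] + attributes[j]) / 2.0)
    if pseudo:
        return np.vstack([attributes, np.stack(pseudo)])
    return attributes


def embedding_loss(features, labels, attributes, temperature=0.1):
    """Cross-entropy over cosine similarities to every class attribute.

    Encourages each image feature to be close to its own class
    attributes *relative to* the neighbouring (and pseudo) classes,
    rather than in isolation.
    """
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    a = attributes / np.linalg.norm(attributes, axis=1, keepdims=True)
    logits = (f @ a.T) / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()


if __name__ == "__main__":
    # Three toy classes; the first two have nearly identical attributes,
    # so one pseudo intermediate class is inserted between them.
    attrs = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]])
    aug = make_intermediate_classes(attrs, similarity_threshold=0.9)
    print(aug.shape)  # one pseudo class added: (4, 2)

    feats = np.array([[1.0, 0.0], [0.0, 1.0]])
    print(embedding_loss(feats, np.array([0, 2]), aug))
```

The pseudo classes only appear as extra columns in the similarity logits: they compete with the real classes during training, which is what pushes apart the embeddings of semantically similar classes.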