- [pdf] [supp] [arXiv]
Anomaly Detection With Domain Adaptation
Despite the great advances made in domain adaptation (DA), the vast majority of DA methods target classical machine-learning tasks such as classification. In this paper, we study a novel research direction: semi-supervised anomaly detection with domain adaptation. Given a set of normal data from a source domain and a limited number of normal examples from a target domain, the goal is to obtain a well-performing anomaly detector for the target domain. We present Invariant Representation Anomaly Detection (IRAD) to solve this problem: we first learn to extract a domain-invariant representation, using a shared cross-domain encoder trained jointly with source-specific encoders and generators via adversarial learning. An anomaly detector is then trained on the learned representations. We evaluate IRAD extensively on anomaly detection datasets, object recognition datasets, and digits benchmarks. Experimental results show that IRAD outperforms baseline models by a wide margin across datasets. We also derive a theoretical lower bound on the joint error that explains the performance decay from overtraining, and an upper bound on the generalization error.
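The pipeline described above, mapping both domains' normal data into a shared representation space and fitting a one-class detector there, can be sketched minimally. Everything below is an illustrative assumption, not the paper's actual method: the fixed linear projection stands in for IRAD's adversarially trained shared encoder, and a centroid-distance score stands in for its anomaly detector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a learned shared encoder: a fixed linear projection.
# (Assumption for illustration -- IRAD trains this adversarially alongside
# source-specific encoders and generators.)
W = rng.normal(size=(8, 4))

def encode(x):
    # Map raw features into the shared (domain-invariant) representation space.
    return x @ W

# "Normal" training data: plentiful in the source domain, scarce in the target.
source_normals = rng.normal(loc=0.0, size=(500, 8))
target_normals = rng.normal(loc=0.1, size=(20, 8))

# Fit a simple one-class detector on the shared embeddings:
# score = distance to the centroid of normal embeddings.
z_train = encode(np.vstack([source_normals, target_normals]))
center = z_train.mean(axis=0)
threshold = np.quantile(np.linalg.norm(z_train - center, axis=1), 0.95)

def anomaly_score(x):
    return np.linalg.norm(encode(x) - center, axis=1)

# Target-domain test points: one normal-looking, one far off-distribution.
test = np.vstack([rng.normal(loc=0.1, size=(1, 8)),
                  np.full((1, 8), 6.0)])
scores = anomaly_score(test)
print(scores[1] > threshold)
```

In this toy setup the off-distribution point should score well above the normal-data threshold; the design point it illustrates is that detection happens entirely in the shared representation space, so the scarce target-domain normals only need to shape that space, not train a full detector on their own.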