Unsupervised Visual Domain Adaptation: A Deep Max-Margin Gaussian Process Approach

Minyoung Kim, Pritish Sahu, Behnam Gholami, Vladimir Pavlovic; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 4380-4390

Abstract


For unsupervised domain adaptation, the target domain error can be provably reduced by having a shared input representation that makes the source and target domains indistinguishable from each other. Very recently it has been shown that it is not only critical to match the marginal input distributions, but also to align the output class distributions. The latter can be achieved by minimizing the maximum discrepancy of predictors. In this paper, we take this principle further by proposing a more systematic and effective way to achieve hypothesis consistency using Gaussian processes (GP). The GP allows us to induce a hypothesis space of classifiers from the posterior distribution of the latent random functions, turning the learning into a large-margin posterior separation problem, significantly easier to solve than previous approaches based on adversarial minimax optimization. We formulate a learning objective that effectively influences the posterior to minimize the maximum discrepancy. This is shown to be equivalent to maximizing margins and minimizing uncertainty of the class predictions in the target domain. Empirical results demonstrate that our approach leads to state-of-the-art performance superior to existing methods on several challenging benchmarks for domain adaptation.
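To make the core idea concrete, here is a minimal NumPy sketch (not the paper's actual deep architecture or objective) of the quantities the abstract refers to: a GP posterior over a latent classifier function induces a distribution of hypotheses, and sampling from that posterior lets one measure predictor disagreement on target points, while the ratio of posterior mean to posterior standard deviation acts as a margin/uncertainty score. All data, kernel choices, and variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X1, X2, ls=1.0):
    # squared-exponential kernel between two point sets
    d = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d / (2.0 * ls ** 2))

# Toy labeled source data: two classes with labels +1 / -1
Xs = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
ys = np.hstack([-np.ones(20), np.ones(20)])
# Unlabeled target data: same classes under a covariate shift
Xt = np.vstack([rng.normal(-1.5, 1, (15, 2)), rng.normal(2.5, 1, (15, 2))])

# GP-regression posterior of the latent function f at the target inputs
K = rbf(Xs, Xs) + 1e-2 * np.eye(len(Xs))      # source Gram matrix + noise
Ks = rbf(Xt, Xs)                               # target-source cross-kernel
Kss = rbf(Xt, Xt)                              # target Gram matrix
Kinv = np.linalg.inv(K)
mu = Ks @ Kinv @ ys                            # posterior mean on target
cov = Kss - Ks @ Kinv @ Ks.T                   # posterior covariance on target

# Draw sample classifiers from the posterior; their disagreement on a
# target point is a proxy for the maximum discrepancy of hypotheses there.
L = np.linalg.cholesky(cov + 1e-6 * np.eye(len(Xt)))
samples = mu + (L @ rng.standard_normal((len(Xt), 50))).T  # 50 sampled f's
preds = np.sign(samples)
disagree = (preds != preds[0]).any(axis=0)     # points where samples differ

# Margin score: |mean| / std. Driving this up for target points is the
# "maximize margins, minimize uncertainty" behavior the abstract describes.
margin = np.abs(mu) / np.sqrt(np.diag(cov) + 1e-9)
```

In the sketch, target points with low `margin` (posterior mean close to zero relative to its uncertainty) are exactly those where sampled hypotheses tend to disagree; the paper's learning objective shapes the shared representation so that such points become rare.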

Related Material


[bibtex]
@InProceedings{Kim_2019_CVPR,
author = {Kim, Minyoung and Sahu, Pritish and Gholami, Behnam and Pavlovic, Vladimir},
title = {Unsupervised Visual Domain Adaptation: A Deep Max-Margin Gaussian Process Approach},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}