Adversarial Reinforcement Learning for Unsupervised Domain Adaptation

Youshan Zhang, Hui Ye, Brian D. Davison; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 635-644

Abstract


Transferring knowledge from an existing labeled domain to a new domain often suffers from domain shift, in which performance degrades because of differences between the domains. Domain adaptation has been a prominent method for mitigating this problem. Many pre-trained neural networks are available for feature extraction, but little work discusses how to select the best feature instances across different pre-trained models for both the source and target domains. We propose a novel approach that employs reinforcement learning to select the most relevant features across the two domains. Specifically, in this framework, we employ Q-learning to learn policies for an agent to make feature selection decisions by approximating the action-value function. After selecting the best features, we propose adversarial distribution alignment learning to improve the prediction results. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art methods.
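To illustrate the Q-learning component described above, the sketch below shows tabular Q-learning applied to a toy feature-selection task. This is not the authors' implementation: the environment, the reward proxy (a simulated validation score for each candidate feature set), and all hyperparameters are illustrative assumptions. It simply demonstrates how an agent can learn, via the action-value update, which of several candidate pre-trained feature sets is most useful.

```python
import numpy as np

# Hypothetical sketch (not the paper's method): a single-state, tabular
# Q-learning agent choosing among candidate pre-trained feature sets.
rng = np.random.default_rng(0)

n_sets = 4                            # candidate feature sets (assumption)
true_quality = [0.2, 0.5, 0.9, 0.4]   # hidden usefulness of each set (simulated)

Q = np.zeros(n_sets)                  # action-value estimates
alpha, gamma, epsilon = 0.1, 0.0, 0.1  # gamma=0: immediate-reward setting

for episode in range(2000):
    # epsilon-greedy action selection
    if rng.random() < epsilon:
        a = int(rng.integers(n_sets))
    else:
        a = int(np.argmax(Q))
    # noisy reward proxy, e.g. validation accuracy of a classifier
    # trained on the selected features (simulated here)
    r = true_quality[a] + rng.normal(0.0, 0.05)
    # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[a] += alpha * (r + gamma * np.max(Q) - Q[a])

best = int(np.argmax(Q))
print("selected feature set:", best)
```

In practice, the paper's agent operates over feature instances from multiple pre-trained networks for both source and target domains; this bandit-style reduction only shows the shape of the action-value update.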

Related Material


[pdf]
[bibtex]
@InProceedings{Zhang_2021_WACV,
  author    = {Zhang, Youshan and Ye, Hui and Davison, Brian D.},
  title     = {Adversarial Reinforcement Learning for Unsupervised Domain Adaptation},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2021},
  pages     = {635-644}
}