Visual Transfer Between Atari Games Using Competitive Reinforcement Learning

Akshita Mittel, Purna Sowmya Munukutla; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019

Abstract


Modern deep Reinforcement Learning (RL) methods are highly effective at selecting optimal policies to maximize rewards. Combining these methods with deep learning approaches shows promise for challenging tasks, as rich visual information can be leveraged for policy selection. In this paper, we explore the use of visual representations to transfer the knowledge of an RL agent from one domain to another. More specifically, we propose a method that generalizes to a target game using an RL agent trained on a source game in the Atari 2600 environment. Instead of fine-tuning a pre-trained model on the target game, we propose a learning approach that updates the model using multiple RL agents trained in parallel on different representations of the target game. These visual representations of the target game are generated by learning, in an unsupervised manner, a visual mapping between the source game and the target game. Training on the new representations derived from this mapping improves the RL agent's updates in terms of performance, data efficiency, and stability. To demonstrate the effectiveness of this approach, the transfer learning procedure is evaluated on two pairs of Atari games chosen in contrasting settings.
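To make the parallel-update idea from the abstract concrete, the following is a minimal toy sketch, not the paper's method: a shared linear model is updated by averaging gradients from several "agents", each of which sees a different visual representation of the same observation. The function names (`visual_map`, `agent_gradient`), the fixed per-agent linear distortion standing in for the learned unsupervised mapping, and the squared-error update standing in for an RL update are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def visual_map(frame, agent_id):
    """Hypothetical stand-in for the learned source->target visual mapping.

    In the paper this mapping is learned unsupervised between games; here
    each agent simply receives a different fixed rescaling of the frame.
    """
    mix = 0.8 + 0.1 * agent_id          # per-agent representation shift
    return mix * frame

def agent_gradient(theta, frame, reward):
    """Toy update signal: gradient of 0.5*(reward - theta.frame)^2."""
    pred = theta @ frame
    return (reward - pred) * frame

# Shared model updated by three agents trained "in parallel", each on a
# different representation of the same target-game frames.
theta = np.zeros(4)
true_w = np.array([1.0, -0.5, 0.3, 0.0])  # stand-in for the true value fn
for step in range(2000):
    frame = rng.normal(size=4)            # stand-in for a game observation
    reward = true_w @ frame               # stand-in for the game reward
    grads = [agent_gradient(theta, visual_map(frame, a), reward)
             for a in range(3)]
    theta += 0.01 * np.mean(grads, axis=0)  # averaged parallel update
```

The averaging of per-representation gradients is one plausible way to read "multiple RL agents trained in parallel" updating one model; the actual aggregation rule in the paper may differ.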

Related Material


[bibtex]
@InProceedings{Mittel_2019_CVPR_Workshops,
author = {Mittel, Akshita and Sowmya Munukutla, Purna},
title = {Visual Transfer Between Atari Games Using Competitive Reinforcement Learning},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}