Double-Task Deep Q-Learning With Multiple Views

Jun Chen, Tingzhu Bai, Xiangsheng Huang, Xian Guo, Jianing Yang, Yuxing Yao; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 1050-1058

Abstract


Deep reinforcement learning enables autonomous robots to learn large repertoires of behavioral skills with minimal human intervention. However, the applications of direct deep reinforcement learning have been restricted. In this paper we introduce a new definition of the action space and propose a double-task deep Q-network with multiple views (DMDQN), based on double-DQN and dueling-DQN. As an extension, we define a multi-task model for more complex jobs. Moreover, a data-augmentation policy is applied, which includes auto-sampling and action-overturn. The exploration policy is formed when DMDQN and data augmentation are combined. For the robotic system's steady exploration, we designed safety constraints according to the working conditions. Our experiments show that our double-task DQN with multiple views performs better than single-task and single-view models. Combining our DMDQN with data augmentation, the robotic system can reach the object in an exploratory way.
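The abstract builds DMDQN on two standard ingredients, double-DQN and dueling-DQN. As context for readers unfamiliar with how the two compose, here is a minimal NumPy sketch (not the authors' implementation): a dueling head that decomposes Q(s,a) into a state value plus a mean-centered advantage, and a double-DQN target in which the online network selects the greedy action while the target network evaluates it. The parameter shapes and helper names are illustrative stand-ins for the paper's convolutional network.

```python
import numpy as np

N_ACTIONS = 4   # illustrative action-space size
GAMMA = 0.99    # discount factor

def dueling_q(params, state):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    `params` is a dict of weight arrays, a stand-in for a learned network."""
    h = np.tanh(state @ params["W"])       # shared feature trunk
    v = h @ params["w_v"]                  # scalar state value V(s)
    adv = h @ params["W_a"]                # per-action advantage A(s, a)
    return v + adv - adv.mean()            # mean-centered combination

def double_dqn_target(online, target, reward, next_state, done):
    """Double-DQN target: the online net selects the argmax action,
    the target net evaluates it, reducing overestimation bias."""
    if done:
        return reward
    a_star = np.argmax(dueling_q(online, next_state))           # selection
    return reward + GAMMA * dueling_q(target, next_state)[a_star]  # evaluation

def init_params(seed, state_dim=8, hidden=16):
    """Random toy parameters; real systems learn these by gradient descent."""
    r = np.random.default_rng(seed)
    return {"W": r.normal(size=(state_dim, hidden)),
            "w_v": r.normal(size=hidden),
            "W_a": r.normal(size=(hidden, N_ACTIONS))}

online, target = init_params(1), init_params(2)
s = np.random.default_rng(0).normal(size=8)
y = double_dqn_target(online, target, reward=1.0, next_state=s, done=False)
```

In the paper's double-task, multi-view setting one would run a head like this per task, with the trunk consuming features from several camera views; the target computation itself is unchanged.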

Related Material


[pdf]
[bibtex]
@InProceedings{Chen_2017_ICCV,
author = {Chen, Jun and Bai, Tingzhu and Huang, Xiangsheng and Guo, Xian and Yang, Jianing and Yao, Yuxing},
title = {Double-Task Deep Q-Learning With Multiple Views},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017}
}