Leveraging Deep Reinforcement Learning for Reaching Robotic Tasks

Kapil Katyal, I-Jeng Wang, Philippe Burlina; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017, pp. 18-19

Abstract


This work leverages deep reinforcement learning (DRL) to make robotic control robust to changes in the robot manipulator or the environment. The aim is to perform reaching, collision avoidance, and grasping (i) without explicit, prior, and fine-grained knowledge of the arm's structure and kinematics, (ii) without careful hand-eye calibration, (iii) solely from visual/retinal input, and (iv) in ways that are robust to environmental changes. We learn a manipulation policy that, as we show, takes first steps toward generalizing to changes in the environment and can scale and adapt to new manipulators. Experiments are aimed at (a) comparing different deep convolutional neural network (DCNN) architectures, (b) assessing reward prediction for two radically different manipulators, and (c) performing a sensitivity analysis comparing a classical visual servoing formulation of the reaching task with the proposed DRL method.
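To make the reinforcement-learning formulation of reaching concrete, the sketch below shows a deliberately simplified stand-in: tabular Q-learning on a one-dimensional reaching task, where an agent learns to move a gripper toward a target position from reward alone, with no kinematic model. This is not the paper's method (which trains deep networks on visual input); every name and parameter here is illustrative.

```python
import random

def train_reaching_policy(n_positions=10, target=7, episodes=2000, seed=0,
                          alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning for a toy 1-D reaching task (illustrative only).

    States are gripper positions 0..n_positions-1; action 0 moves left,
    action 1 moves right. Reaching `target` yields reward +1 and ends the
    episode; every other step costs -0.01. No arm model or calibration is
    used -- the policy emerges purely from the reward signal.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_positions)]  # Q-values per (state, action)
    for _ in range(episodes):
        s = rng.randrange(n_positions)            # random start position
        for _ in range(50):                       # step limit per episode
            if s == target:
                break
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda i: q[s][i])
            s2 = max(0, min(n_positions - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == target else -0.01
            best_next = 0.0 if s2 == target else max(q[s2])
            # One-step Q-learning update.
            q[s][a] += alpha * (r + gamma * best_next - q[s][a])
            s = s2
    # Greedy policy per state: 1 = move right, 0 = move left.
    return [max((0, 1), key=lambda i: q[s][i]) for s in range(n_positions)]

policy = train_reaching_policy()
```

After training, states left of the target choose "right" and states right of it choose "left". The paper replaces this toy table with a DCNN that maps raw images to values, so the same reward-driven scheme scales to real manipulators without hand-eye calibration.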

Related Material


[bibtex]
@InProceedings{Katyal_2017_CVPR_Workshops,
author = {Katyal, Kapil and Wang, I-Jeng and Burlina, Philippe},
title = {Leveraging Deep Reinforcement Learning for Reaching Robotic Tasks},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}