Environment Upgrade Reinforcement Learning for Non-Differentiable Multi-Stage Pipelines

Shuqin Xie, Zitian Chen, Chao Xu, Cewu Lu; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 3810-3819

Abstract


Recent advances in multi-stage algorithms have shown great promise, but two important problems remain. First, at inference time, information cannot feed back from downstream stages to upstream ones. Second, at training time, end-to-end training is impossible when the pipeline contains non-differentiable functions, so the different stages cannot be jointly optimized. In this paper, we propose a novel environment upgrade reinforcement learning framework to solve the feedback and joint-optimization problems. Our framework re-links the downstream stage to the upstream stage through a reinforcement learning agent: while the agent is trained to improve final performance by refining the upstream stage's output, the downstream stage (the environment) is upgraded according to the agent's policy, so that agent policy and environment are jointly optimized. We further propose a training algorithm for this framework that addresses the different training demands of the agent and the environment. Experiments on instance segmentation and human pose estimation demonstrate the effectiveness of the proposed framework.
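To make the alternating scheme concrete, below is a minimal sketch of how the agent update and the environment upgrade might interleave. All module names, tensor shapes, the toy reward, and the REINFORCE-style policy update are illustrative assumptions, not the paper's implementation; in the paper the stages are full vision networks (e.g., for instance segmentation, a detection stage followed by a segmentation stage) and the reward comes from the task metric.

import torch
import torch.nn as nn

DIM = 16

# Stage 1 (e.g., a detector): held fixed here.
upstream = nn.Linear(DIM, DIM)
for p in upstream.parameters():
    p.requires_grad_(False)

# Agent: maps the upstream output to mean/log-std of a refinement action.
agent = nn.Linear(DIM, 2 * DIM)

# Stage 2, the "environment" (e.g., a segmentation head), upgraded jointly.
downstream = nn.Linear(DIM, 1)

opt_agent = torch.optim.Adam(agent.parameters(), lr=1e-3)
opt_env = torch.optim.Adam(downstream.parameters(), lr=1e-3)

def task_reward(pred, target):
    # Stand-in for the non-differentiable task metric; here a toy
    # negative squared error, so higher is better.
    return -(pred - target).pow(2).mean(dim=1)

for step in range(200):
    x = torch.randn(32, DIM)          # toy inputs
    target = torch.randn(32, 1)       # toy labels

    coarse = upstream(x)              # upstream output to be refined
    mean, log_std = agent(coarse).chunk(2, dim=1)
    dist = torch.distributions.Normal(mean, log_std.exp())
    action = dist.sample()            # sampling breaks differentiability
    refined = coarse + action

    # Agent update: REINFORCE on the reward of the *final* stage, which
    # re-links downstream performance to the upstream refinement.
    with torch.no_grad():
        r = task_reward(downstream(refined), target)
    log_prob = dist.log_prob(action).sum(dim=1)
    agent_loss = -(log_prob * (r - r.mean())).mean()
    opt_agent.zero_grad()
    agent_loss.backward()
    opt_agent.step()

    # Environment upgrade: retrain the downstream stage on the refined
    # inputs produced under the current policy.
    env_loss = (downstream(refined.detach()) - target).pow(2).mean()
    opt_env.zero_grad()
    env_loss.backward()
    opt_env.step()

Alternating the two updates reflects the different training demands noted in the abstract: the policy needs a reward signal routed through the non-differentiable final metric, while the downstream stage needs supervised retraining on the distribution of refined inputs the current policy produces.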

Related Material


@InProceedings{Xie_2018_CVPR,
author = {Xie, Shuqin and Chen, Zitian and Xu, Chao and Lu, Cewu},
title = {Environment Upgrade Reinforcement Learning for Non-Differentiable Multi-Stage Pipelines},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018},
pages = {3810-3819}
}