A Reinforcement Learning Approach to the View Planning Problem
Mustafa Devrim Kaba, Mustafa Gokhan Uzunbas, Ser Nam Lim; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 6933-6941
Abstract
We present a Reinforcement Learning (RL) solution to the view planning problem (VPP), which generates a sequence of view points that are capable of sensing all accessible areas of a given object represented as a 3D model. In doing so, the goal is to minimize the number of view points, making the VPP a class of set covering optimization problems (SCOP). The SCOP is NP-hard, and inapproximability results tell us that the greedy algorithm provides the best approximation that runs in polynomial time. In order to find a solution that is better than the greedy algorithm, (i) we introduce a novel score function by exploiting the geometry of the 3D model, (ii) we devise an intuitive approach to the VPP using this score function, and (iii) we cast the VPP as a Markov Decision Process (MDP), and solve the MDP in an RL framework using well-known RL algorithms. In particular, we use SARSA, Watkins-Q and TD with function approximation to solve the MDP. We compare the results of our method with the baseline greedy algorithm on an extensive set of test objects, and show that we can outperform the baseline in almost all cases.
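The greedy baseline the abstract refers to is the classical greedy heuristic for set cover: at each step, pick the viewpoint that senses the largest number of still-unseen surface elements. The following is a minimal sketch of that baseline (not the authors' RL method); the viewpoint names and coverage sets are hypothetical toy data.

```python
def greedy_view_plan(universe, coverage):
    """Greedy set-cover baseline for view planning.

    universe: set of surface elements that must be sensed.
    coverage: dict mapping a viewpoint id to the set of elements it senses.
    Returns a list of viewpoints chosen greedily by marginal coverage.
    """
    uncovered = set(universe)
    plan = []
    while uncovered:
        # Pick the viewpoint that covers the most uncovered elements.
        best = max(coverage, key=lambda v: len(coverage[v] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:
            # Remaining elements are not visible from any viewpoint.
            break
        plan.append(best)
        uncovered -= gained
    return plan

# Toy example with three hypothetical viewpoints:
views = {"v1": {1, 2, 3}, "v2": {3, 4}, "v3": {4, 5, 6}}
print(greedy_view_plan({1, 2, 3, 4, 5, 6}, views))  # ['v1', 'v3']
```

The greedy choice gives the well-known ln(n) approximation guarantee for set cover; the paper's RL formulation aims to beat this baseline in practice by learning a policy over view-point sequences.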
Related Material
[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Kaba_2017_CVPR,
author = {Devrim Kaba, Mustafa and Gokhan Uzunbas, Mustafa and Nam Lim, Ser},
title = {A Reinforcement Learning Approach to the View Planning Problem},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {July},
year = {2017}
}