Collaborative Deep Reinforcement Learning for Joint Object Search

Xiangyu Kong, Bo Xin, Yizhou Wang, Gang Hua; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 1695-1704

Abstract


We examine the problem of joint top-down active search of multiple objects under interaction, e.g., a person riding a bicycle, a cup on a table, etc. Objects under interaction can often provide contextual cues to each other that facilitate more efficient search. By treating each detector as an agent, we present the first collaborative multi-agent deep reinforcement learning algorithm to learn the optimal policy for joint active object localization, which effectively exploits such beneficial contextual information. We learn inter-agent communication through cross connections with gates between the Q-networks, which is facilitated by a novel multi-agent deep Q-learning algorithm with joint exploitation sampling. We verify our proposed method on multiple object detection benchmarks. Not only does our model help to improve the performance of state-of-the-art active localization models, it also reveals interesting co-detection patterns that are intuitively interpretable.
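To make the idea of gated cross connections between two agents' Q-networks concrete, here is a minimal numpy sketch. It is not the paper's architecture: the dimensions, weight layout, and single-layer form are illustrative assumptions. Each agent computes a hidden state from its own features, receives the partner's hidden state through a sigmoid gate (the "cross connection with gate"), and outputs Q-values over its own actions; both agents then act greedily given the exchanged messages, loosely mirroring joint exploitation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GatedQNetwork:
    """One agent's Q-network with a gated cross connection (illustrative sketch)."""

    def __init__(self, feat_dim, hidden_dim, n_actions):
        # All weights are random here; in training they would be learned jointly.
        self.W_h = rng.normal(0, 0.1, (hidden_dim, feat_dim))    # own-feature projection
        self.W_g = rng.normal(0, 0.1, (hidden_dim, hidden_dim))  # gate over partner's state
        self.W_m = rng.normal(0, 0.1, (hidden_dim, hidden_dim))  # message from partner's state
        self.W_q = rng.normal(0, 0.1, (n_actions, 2 * hidden_dim))

    def hidden(self, x):
        return np.tanh(self.W_h @ x)

    def q_values(self, h_self, h_other):
        gate = sigmoid(self.W_g @ h_other)     # per-unit gate in (0, 1)
        message = gate * (self.W_m @ h_other)  # gated cross connection from the partner
        return self.W_q @ np.concatenate([h_self, message])

def joint_greedy_actions(q_a, q_b):
    # Joint exploitation: each agent exploits given the messages already exchanged.
    return int(np.argmax(q_a)), int(np.argmax(q_b))

# Hypothetical setup: a "person" agent and a "bicycle" agent, 16-d region
# features, 8-d hidden states, 9 localization actions (all dimensions assumed).
person_net = GatedQNetwork(feat_dim=16, hidden_dim=8, n_actions=9)
bike_net = GatedQNetwork(feat_dim=16, hidden_dim=8, n_actions=9)

x_person, x_bike = rng.normal(size=16), rng.normal(size=16)
h_p, h_b = person_net.hidden(x_person), bike_net.hidden(x_bike)

q_p = person_net.q_values(h_p, h_b)  # person's Q-values, informed by the bicycle agent
q_b = bike_net.q_values(h_b, h_p)    # bicycle's Q-values, informed by the person agent
a_p, a_b = joint_greedy_actions(q_p, q_b)
```

The gate lets each agent modulate how much of its partner's context flows into its own value estimates, which is the mechanism the abstract credits for exploiting inter-object cues.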

Related Material


@InProceedings{Kong_2017_CVPR,
author = {Kong, Xiangyu and Xin, Bo and Wang, Yizhou and Hua, Gang},
title = {Collaborative Deep Reinforcement Learning for Joint Object Search},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {July},
year = {2017}
}