Spatial Memory for Context Reasoning in Object Detection

Xinlei Chen, Abhinav Gupta; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 4086-4096

Abstract

Modeling instance-level context and object-object relationships is extremely challenging. It requires reasoning about bounding boxes of different locations, scales, aspect ratios, etc. Above all, instance-level spatial reasoning inherently requires modeling conditional distributions on previous detections. But our current object detection systems do not have any memory to remember what to condition on! State-of-the-art object detectors still detect all objects in parallel, followed by non-maximum suppression (NMS). While memory has been used for tasks such as captioning and VQA, those approaches use image-level memory cells without capturing the spatial layout. Modeling object-object relationships, on the other hand, requires spatial reasoning -- not only a memory to store the spatial layout, but also an effective reasoning module to extract spatial patterns. This paper presents a conceptually simple yet powerful solution -- Spatial Memory Network (SMN) -- to model instance-level context efficiently and effectively. Our spatial memory essentially assembles object instances back into a pseudo "image" representation that is easy to feed into another ConvNet for object-object context reasoning. This leads to a new sequential reasoning architecture in which image and memory are processed in parallel to obtain detections that in turn update the memory. We show that our SMN architecture is effective: it provides a 2.2% improvement over a baseline Faster RCNN on the COCO dataset with VGG16.
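To make the described architecture concrete, below is a minimal PyTorch-style sketch of just the memory component: detected instances are written back into a 2D memory grid at their box locations, and a small ConvNet reasons over the assembled pseudo "image" to produce context features. All module names, tensor shapes, and the plain-overwrite write scheme here are illustrative assumptions, not the paper's exact design; the detection backbone, learned write/read operations, and training procedure are omitted.

import torch
import torch.nn as nn

class SpatialMemory(nn.Module):
    """Hypothetical sketch of an SMN-style spatial memory.

    Instances are written into a 2D grid at their box locations; a small
    ConvNet then reasons over the grid. Shapes and the write scheme are
    assumptions for illustration.
    """

    def __init__(self, channels=256, grid_size=64):
        super().__init__()
        self.grid_size = grid_size
        # Memory starts empty: one feature vector per spatial cell.
        self.register_buffer(
            "memory", torch.zeros(1, channels, grid_size, grid_size))
        # Context reasoning module: a small ConvNet over the memory.
        self.reason = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )

    def write(self, box, feat):
        """Write an instance feature into the cells covered by `box`.

        box: (x1, y1, x2, y2) in [0, 1] image coordinates.
        feat: (channels,) pooled feature of the detected instance.
        """
        g = self.grid_size
        x1, y1, x2, y2 = [int(round(v * (g - 1))) for v in box]
        # Broadcast the instance feature over its spatial extent
        # (a plain overwrite is assumed here, not a learned write).
        self.memory[0, :, y1:y2 + 1, x1:x2 + 1] = feat[:, None, None]

    def read(self):
        """Run the reasoning ConvNet over the memory for context features."""
        return self.reason(self.memory)


# Toy sequential loop: each accepted detection updates the memory, and
# the refreshed context map is available when scoring the next detection.
mem = SpatialMemory(channels=256, grid_size=64)
for box, feat in [((0.1, 0.2, 0.4, 0.6), torch.randn(256)),
                  ((0.5, 0.5, 0.9, 0.8), torch.randn(256))]:
    mem.write(box, feat)
    context = mem.read()  # (1, 256, 64, 64) context map
    print(context.shape)

The point of the grid layout is that, once instances are assembled into a pseudo "image", context reasoning can reuse ordinary convolutions over the spatial layout rather than operating on an unordered set of boxes.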

Related Material

[bibtex]
@InProceedings{Chen_2017_ICCV,
author = {Chen, Xinlei and Gupta, Abhinav},
title = {Spatial Memory for Context Reasoning in Object Detection},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}