Multi-Scale Context Intertwining for Semantic Segmentation

Di Lin, Yuanfeng Ji, Dani Lischinski, Daniel Cohen-Or, Hui Huang; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 603-619

Abstract

Accurate semantic image segmentation requires the joint consideration of local appearance, semantic information, and global scene context. In today’s age of pre-trained deep networks and their powerful convolutional features, state-of-the-art semantic segmentation approaches differ mostly in how they choose to combine these different kinds of information. In this work, we propose a novel scheme for aggregating features from different scales, which we refer to as Multi-Scale Context Intertwining (MSCI). In contrast to previous approaches, which typically propagate information between scales in a one-directional manner, we merge pairs of feature maps in a bidirectional and recurrent fashion, via connections between two LSTM chains. By training the parameters of the LSTM units on the segmentation task, our approach learns how to extract powerful and effective features for pixel-level semantic segmentation, which are then combined hierarchically. Furthermore, rather than using fixed information propagation routes, we subdivide images into superpixels and use the spatial relationships between them to perform image-adapted context aggregation. Our extensive evaluation on public benchmarks indicates that all of the aforementioned components of our approach increase the effectiveness of information propagation throughout the network and significantly improve its eventual segmentation accuracy.
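
The bidirectional, recurrent exchange between two scales described above can be illustrated with a short sketch. The following PyTorch code is a minimal illustration of that idea only, not the authors' implementation: ConvLSTMCell, IntertwineBlock, and the steps parameter are hypothetical names, the convolutional LSTM variant is an assumption, and the hierarchical merging of multiple scale pairs and the superpixel-based routing are omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvLSTMCell(nn.Module):
    # Simplified convolutional LSTM cell (assumption: an illustrative
    # stand-in for the LSTM units the abstract describes).
    def __init__(self, channels):
        super().__init__()
        # All four gates are computed jointly from input and hidden state.
        self.gates = nn.Conv2d(2 * channels, 4 * channels, 3, padding=1)

    def forward(self, x, h, c):
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class IntertwineBlock(nn.Module):
    # Bidirectional exchange between a finer and a coarser feature map,
    # run for a few recurrent steps via two connected LSTM chains.
    def __init__(self, channels, steps=2):
        super().__init__()
        self.steps = steps
        self.hi_cell = ConvLSTMCell(channels)
        self.lo_cell = ConvLSTMCell(channels)

    def forward(self, feat_hi, feat_lo):
        h_hi, c_hi = feat_hi, torch.zeros_like(feat_hi)
        h_lo, c_lo = feat_lo, torch.zeros_like(feat_lo)
        for _ in range(self.steps):
            # Coarse context is upsampled and fed to the fine-scale chain...
            up = F.interpolate(h_lo, size=h_hi.shape[-2:],
                               mode='bilinear', align_corners=False)
            # ...while fine detail is pooled down for the coarse-scale chain.
            down = F.adaptive_avg_pool2d(h_hi, h_lo.shape[-2:])
            h_hi, c_hi = self.hi_cell(up, h_hi, c_hi)
            h_lo, c_lo = self.lo_cell(down, h_lo, c_lo)
        return h_hi, h_lo

# Hypothetical usage with two scales of backbone features:
block = IntertwineBlock(channels=64)
hi, lo = torch.randn(1, 64, 64, 64), torch.randn(1, 64, 32, 32)
out_hi, out_lo = block(hi, lo)  # context-enriched maps at both scales

Note that both chains read each other's previous hidden state before either updates, so information genuinely flows in both directions at every step, rather than cascading one way as in typical top-down decoders.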

Related Material

[pdf]
[bibtex]
@InProceedings{Lin_2018_ECCV,
  author    = {Lin, Di and Ji, Yuanfeng and Lischinski, Dani and Cohen-Or, Daniel and Huang, Hui},
  title     = {Multi-Scale Context Intertwining for Semantic Segmentation},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  month     = {September},
  year      = {2018},
  pages     = {603--619}
}