Quadtree Generating Networks: Efficient Hierarchical Scene Parsing with Sparse Convolutions

Kashyap Chitta, Jose M. Alvarez, Martial Hebert; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 2020-2029

Abstract


Semantic segmentation with Convolutional Neural Networks is a memory-intensive task due to the high spatial resolution of feature maps and output predictions. In this paper, we present Quadtree Generating Networks (QGNs), a novel approach that drastically reduces the memory footprint of modern semantic segmentation networks. The key idea is to use quadtrees to represent the predictions and target segmentation masks instead of dense pixel grids. Our quadtree representation enables hierarchical processing of an input image, with the most computationally demanding layers only being used at regions in the image containing boundaries between classes. In addition, given a trained model, our representation enables flexible inference schemes to trade off accuracy and computational cost, allowing the network to adapt in constrained situations such as embedded devices. We demonstrate the benefits of our approach on the Cityscapes, SUN-RGBD and ADE20k datasets. On Cityscapes, we obtain a relative 3% mIoU improvement compared to a dilated network with similar memory consumption, and incur only a 3% relative mIoU drop compared to a large dilated network, while reducing memory consumption by over 4x. Our code is available at https://github.com/kashyap7x/QGN.
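The core idea of the quadtree representation can be illustrated with a minimal sketch (this is not the paper's implementation, just an assumed toy version): a square label mask is stored as a single leaf when it is uniform, and otherwise recursively split into four quadrants. Leaves then concentrate only where class boundaries occur, which is why memory drops on large uniform regions.

```python
# Illustrative sketch (not the QGN code): encode a dense label grid as a
# quadtree by recursively merging quadrants that contain a single class.

def to_quadtree(grid):
    """Return a label if the square grid is uniform, else a 4-tuple of
    sub-quadtrees (top-left, top-right, bottom-left, bottom-right)."""
    first = grid[0][0]
    if all(v == first for row in grid for v in row):
        return first  # leaf: one label covers the whole region
    h = len(grid) // 2
    tl = [row[:h] for row in grid[:h]]
    tr = [row[h:] for row in grid[:h]]
    bl = [row[:h] for row in grid[h:]]
    br = [row[h:] for row in grid[h:]]
    return (to_quadtree(tl), to_quadtree(tr), to_quadtree(bl), to_quadtree(br))

def count_leaves(tree):
    """Number of quadtree leaves, i.e. regions that need a prediction."""
    if not isinstance(tree, tuple):
        return 1
    return sum(count_leaves(t) for t in tree)

# A 4x4 mask with a class boundary only in the top-right quadrant:
mask = [
    [0, 0, 0, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
print(count_leaves(to_quadtree(mask)))  # 7 leaves vs. 16 dense pixels
```

Here the three uniform quadrants collapse to single leaves, and only the quadrant crossing the class boundary is refined to pixel level, mirroring how QGNs reserve the expensive high-resolution layers for boundary regions.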

Related Material


[bibtex]
@InProceedings{Chitta_2020_WACV,
author = {Chitta, Kashyap and Alvarez, Jose M. and Hebert, Martial},
title = {Quadtree Generating Networks: Efficient Hierarchical Scene Parsing with Sparse Convolutions},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {March},
year = {2020}
}