Low-Power Neural Networks for Semantic Segmentation of Satellite Images

Gaetan Bahl, Lionel Daniel, Matthieu Moretti, Florent Lafarge; The IEEE International Conference on Computer Vision (ICCV) Workshops, 2019

Abstract


Semantic segmentation methods have made impressive progress with deep learning. However, while achieving higher and higher accuracy, state-of-the-art networks overlook the complexity of their architectures, which typically feature tens of millions of trainable parameters. Consequently, these networks require substantial computational resources and are largely unsuited to edge devices with tight resource constraints, such as phones, drones, or satellites. In this work, we propose two highly compact neural network architectures for semantic segmentation of images, up to 100,000 times less complex than state-of-the-art architectures while approaching their accuracy. To decrease the complexity of existing networks, our main ideas are to exploit lightweight encoders and decoders built on depth-wise separable convolutions, and to reduce memory usage by removing skip connections between encoder and decoder. Our architectures are designed to be implemented on a basic FPGA such as the one featured in the Intel Altera Cyclone V family of SoCs. We demonstrate the potential of our solutions on binary segmentation of remote sensing images, in particular for extracting clouds and trees from RGB satellite images.
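The parameter savings of depth-wise separable convolutions can be illustrated with a short parameter-count sketch. A standard k x k convolution mixes all input and output channels at once, whereas a separable one factors this into a per-channel (depthwise) k x k convolution followed by a 1x1 pointwise convolution. The channel sizes below are illustrative assumptions, not values taken from the paper:

```python
def standard_conv_params(c_in, c_out, k):
    # Weights of a standard k x k convolution layer (bias ignored):
    # every output channel has a k x k filter over every input channel.
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    # Depthwise stage: one k x k filter per input channel,
    # followed by a pointwise 1x1 convolution mixing channels.
    return c_in * k * k + c_in * c_out

# Hypothetical layer: 32 input channels, 64 output channels, 3x3 kernel.
std = standard_conv_params(32, 64, 3)   # 18432 weights
sep = separable_conv_params(32, 64, 3)  # 2336 weights
print(std, sep, round(std / sep, 1))    # roughly 7.9x fewer parameters
```

For large channel counts the reduction approaches a factor of k^2 (here 9), which is one source of the overall complexity reduction the abstract describes; the removal of encoder-decoder skip connections further cuts the intermediate feature maps that must be kept in memory.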

Related Material


[bibtex]
@InProceedings{Bahl_2019_ICCV,
author = {Bahl, Gaetan and Daniel, Lionel and Moretti, Matthieu and Lafarge, Florent},
title = {Low-Power Neural Networks for Semantic Segmentation of Satellite Images},
booktitle = {The IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2019}
}