Filmy Cloud Removal on Satellite Imagery With Multispectral Conditional Generative Adversarial Nets

Kenji Enomoto, Ken Sakurada, Weimin Wang, Hiroshi Fukui, Masashi Matsuoka, Ryosuke Nakamura, Nobuo Kawaguchi; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017, pp. 48-56

Abstract


This paper proposes a method for cloud removal from visible-light RGB satellite images by extending conditional Generative Adversarial Networks (cGANs) from RGB images to multispectral images. The networks are trained to output images close to the ground truth, given input images synthesized by overlaying clouds on the ground-truth images. In the available dataset, the proportion of forest and sea images is very high, which biases the training set if samples are drawn uniformly. We therefore use t-Distributed Stochastic Neighbor Embedding (t-SNE) to mitigate this sampling bias. Finally, we confirm the feasibility of the proposed networks on a dataset of 4-band images comprising three visible-light bands and one near-infrared (NIR) band.
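
The following is a minimal sketch of the kind of multispectral cGAN training described above, written in PyTorch. The layer sizes, loss weight, and pix2pix-style L1 term are illustrative assumptions, not the authors' exact architecture: the generator maps a 4-band (RGB + NIR) cloudy input to a cloud-free RGB image, and the discriminator is conditioned on the cloudy input.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a 4-band cloudy image (RGB + NIR) to a cloud-free RGB image."""
    def __init__(self, in_ch=4, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores (cloudy input, candidate RGB output) pairs patch-wise."""
    def __init__(self, in_ch=4 + 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, padding=1),
        )

    def forward(self, cloudy, rgb):
        return self.net(torch.cat([cloudy, rgb], dim=1))

def train_step(G, D, opt_g, opt_d, cloudy, clear_rgb, lambda_l1=100.0):
    """One adversarial step plus an L1 term pulling the output toward
    the cloud-free ground truth (lambda_l1 is an assumed weight)."""
    bce = nn.BCEWithLogitsLoss()
    fake = G(cloudy)

    # Discriminator update: real pairs vs. generated pairs.
    opt_d.zero_grad()
    d_real = D(cloudy, clear_rgb)
    d_fake = D(cloudy, fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()

    # Generator update: fool the discriminator and stay close to ground truth.
    opt_g.zero_grad()
    d_fake = D(cloudy, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * nn.functional.l1_loss(fake, clear_rgb)
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

A rough sketch of the t-SNE-based rebalancing idea is shown below: embed image features with t-SNE, cluster the embedding, and draw training samples evenly across clusters so that over-represented scenes (e.g. forest, sea) do not dominate. The feature source, cluster count, and per-cluster quota are assumptions rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

def balanced_indices(features, n_clusters=10, per_cluster=100, seed=0):
    """Return dataset indices sampled evenly from clusters of a t-SNE embedding."""
    rng = np.random.default_rng(seed)
    emb = TSNE(n_components=2, random_state=seed).fit_transform(features)
    labels = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(emb)
    picks = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        picks.extend(rng.choice(idx, size=min(per_cluster, len(idx)), replace=False))
    return np.array(picks)
```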

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Enomoto_2017_CVPR_Workshops,
author = {Enomoto, Kenji and Sakurada, Ken and Wang, Weimin and Fukui, Hiroshi and Matsuoka, Masashi and Nakamura, Ryosuke and Kawaguchi, Nobuo},
title = {Filmy Cloud Removal on Satellite Imagery With Multispectral Conditional Generative Adversarial Nets},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}