Uncertainty-Based Thin Cloud Removal Network via Conditional Variational Autoencoders

Haidong Ding, Yue Zi, Fengying Xie; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 469-485


Existing thin cloud removal methods treat this image restoration task as a point estimation problem and produce a single cloud-free image through a deterministic pipeline. In this paper, we propose a novel thin cloud removal network based on Conditional Variational Autoencoders (CVAE) that generates multiple plausible cloud-free images for each input cloud image. We analyze the image degradation process with a probabilistic graphical model and design the network in an encoder-decoder fashion. Owing to the diversity of samples drawn from the latent space, the proposed method avoids the shortcomings caused by the inaccuracy of a single estimate. Through uncertainty analysis, we can generate a more accurate clear image from these multiple predictions. Furthermore, we create a new benchmark dataset of cloud and clear image pairs from real-world scenes, overcoming the poor generalization caused by training on synthetic datasets. Quantitative and qualitative experiments show that the proposed method significantly outperforms state-of-the-art methods on real-world cloud images. The source code and dataset are available at https://github.com/haidong-Ding/Cloud-Removal.
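The sampling-and-fusion idea described in the abstract can be sketched as follows. This is a toy illustration, not the paper's implementation: `decode` is a hypothetical stand-in for the learned CVAE decoder, and the fusion simply averages the predictions while using their spread as an uncertainty measure.

```python
import random
import statistics

def decode(z, cloudy_pixel):
    # Hypothetical stand-in for the CVAE decoder: maps a latent sample z
    # and the conditioning (cloudy) observation to a cloud-free estimate.
    # The real decoder is a learned neural network; this toy linear map
    # exists only to illustrate the sampling/aggregation flow.
    return 0.7 * cloudy_pixel + 0.05 * z

def remove_cloud(cloudy_pixel, num_samples=32, seed=0):
    """Draw several latent samples, decode each into a cloud-free
    prediction, then fuse them: the mean is the final estimate and the
    per-sample spread serves as an uncertainty measure."""
    rng = random.Random(seed)
    preds = [decode(rng.gauss(0.0, 1.0), cloudy_pixel)
             for _ in range(num_samples)]
    estimate = statistics.fmean(preds)
    uncertainty = statistics.pstdev(preds)
    return estimate, uncertainty
```

In the actual network the latent samples condition a full image decoder, and the uncertainty analysis guides how the multiple predictions are combined; the point here is only that diverse samples yield diverse restorations whose disagreement quantifies confidence.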

Related Material

[pdf] [supp] [code]
@InProceedings{Ding_2022_ACCV,
  author    = {Ding, Haidong and Zi, Yue and Xie, Fengying},
  title     = {Uncertainty-Based Thin Cloud Removal Network via Conditional Variational Autoencoders},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  month     = {December},
  year      = {2022},
  pages     = {469-485}
}