SG-Net: Semantic Guided Network for Image Dehazing

Tao Hong, Xiangyang Guo, Zeren Zhang, Jinwen Ma; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 2763-2779

Abstract


From traditional handcrafted priors to learning-based neural networks, image dehazing techniques have developed considerably. In this paper, we propose an end-to-end Semantic Guided Network (SG-Net) for directly restoring haze-free images. Inspired by the high similarity (mapping relationship) between the transmission maps and the segmentation results of hazy images, we find that the semantic information of the scene provides a strong natural prior for image restoration. To guide the dehazing more effectively and systematically, we utilize semantic segmentation information in three easily portable modes: Semantic Fusion (SF), Semantic Attention (SA), and Semantic Loss (SL), which together compose our Semantic Guided (SG) mechanisms. By embedding these SG mechanisms into existing dehazing networks, we construct the SG-Net series: SG-AOD, SG-GCA, SG-FFA, and SG-AECR. Experiments demonstrate, both quantitatively and qualitatively, that these SG networks outperform their baselines on image dehazing; notably, SG-FFA achieves state-of-the-art performance.
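The abstract names the three SG mechanisms but does not specify their form. Below is a minimal PyTorch sketch of how such mechanisms could look; the module structure, the channel handling, and the choice of an L1 distance for the semantic loss are all illustrative assumptions rather than the authors' implementation (for which see the linked code).

# Minimal sketch of the three SG mechanisms (SF, SA, SL). All names,
# shapes, and design choices here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticFusion(nn.Module):
    # SF (assumed): concatenate semantic features with dehazing features
    # and fuse them back to the original channel count with a 1x1 conv.
    def __init__(self, feat_ch, sem_ch):
        super().__init__()
        self.fuse = nn.Conv2d(feat_ch + sem_ch, feat_ch, kernel_size=1)

    def forward(self, feat, sem):
        # Match the semantic map to the dehazing feature resolution.
        sem = F.interpolate(sem, size=feat.shape[-2:], mode="bilinear",
                            align_corners=False)
        return self.fuse(torch.cat([feat, sem], dim=1))

class SemanticAttention(nn.Module):
    # SA (assumed): derive a spatial attention map from the semantic
    # features and reweight the dehazing features with it.
    def __init__(self, sem_ch):
        super().__init__()
        self.attn = nn.Sequential(nn.Conv2d(sem_ch, 1, kernel_size=1),
                                  nn.Sigmoid())

    def forward(self, feat, sem):
        sem = F.interpolate(sem, size=feat.shape[-2:], mode="bilinear",
                            align_corners=False)
        return feat * self.attn(sem)

def semantic_loss(pred, target, seg_model):
    # SL (assumed): penalize the distance between semantic features of
    # the restored image and of the clear ground-truth image.
    with torch.no_grad():
        target_sem = seg_model(target)
    return F.l1_loss(seg_model(pred), target_sem)

Here seg_model stands for any frozen segmentation backbone that maps an image to a semantic feature map; in SG-Net's setting it would supply the segmentation-derived prior that guides the dehazing network.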

Related Material


[pdf] [supp] [code]
[bibtex]
@InProceedings{Hong_2022_ACCV,
    author    = {Hong, Tao and Guo, Xiangyang and Zhang, Zeren and Ma, Jinwen},
    title     = {SG-Net: Semantic Guided Network for Image Dehazing},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {2763-2779}
}