Semi Supervised Semantic Segmentation Using Generative Adversarial Network
Nasim Souly, Concetto Spampinato, Mubarak Shah; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 5688-5696
Abstract
Semantic segmentation has been a long-standing, challenging task in computer vision. It aims at assigning a label to each image pixel and requires a significant amount of pixel-level annotated data, which is often unavailable. To address this lack of annotations, in this paper we leverage, on one hand, a massive amount of available unlabeled or weakly labeled data, and on the other hand, non-real images created through Generative Adversarial Networks. In particular, we propose a semi-supervised framework, based on Generative Adversarial Networks (GANs), which consists of a generator network that provides extra training examples to a multi-class classifier, acting as the discriminator in the GAN framework, that assigns each sample a label y from the K possible classes or marks it as a fake sample (an extra class). The underlying idea is that adding a large amount of fake visual data forces real samples to be close in the feature space, which, in turn, improves multi-class pixel classification. To ensure a higher quality of images generated by the GAN, and consequently improved pixel classification, we extend the above framework by adding weakly annotated data, i.e., we provide class-level information to the generator. We test our approaches on several challenging benchmark visual datasets, i.e., PASCAL, SiftFlow, Stanford and CamVid, achieving competitive performance compared to state-of-the-art semantic segmentation methods.
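
The (K+1)-way discriminator objective described above can be illustrated with a short loss sketch. The snippet below is a minimal PyTorch sketch and not the authors' released implementation: it assumes per-pixel logits already flattened to shape (num_pixels, K+1), follows the standard semi-supervised GAN formulation (supervised cross-entropy on labeled pixels, a "not fake" term on unlabeled pixels, and a "fake" term on generated samples), and the function name and equal loss weighting are illustrative assumptions.

# Minimal sketch of a (K+1)-class discriminator loss (assumed formulation,
# not the authors' code). Class indices 0..K-1 are the real semantic classes;
# index K is the extra "fake" class.
import torch
import torch.nn.functional as F

def discriminator_loss(logits_labeled, labels,   # real pixels with annotations
                       logits_unlabeled,         # real pixels without annotations
                       logits_fake,              # pixels from generated images
                       num_classes):
    # All logits tensors are assumed flattened to shape (num_pixels, K+1).
    K = num_classes

    # Supervised term: standard cross-entropy over the K real classes.
    loss_sup = F.cross_entropy(logits_labeled, labels)

    # Unlabeled term: real samples should not be assigned to the fake class,
    # i.e. the probability mass on index K should be small.
    p_unlab = F.softmax(logits_unlabeled, dim=1)
    loss_unlab = -torch.log(1.0 - p_unlab[:, K] + 1e-8).mean()

    # Fake term: generated samples should be pushed into the extra class K.
    fake_target = torch.full((logits_fake.size(0),), K,
                             dtype=torch.long, device=logits_fake.device)
    loss_fake = F.cross_entropy(logits_fake, fake_target)

    # Equal weighting of the three terms is an assumption for illustration.
    return loss_sup + loss_unlab + loss_fake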
Related Material
@InProceedings{Souly_2017_ICCV,
author = {Souly, Nasim and Spampinato, Concetto and Shah, Mubarak},
title = {Semi Supervised Semantic Segmentation Using Generative Adversarial Network},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}