Detecting the Unexpected via Image Resynthesis

Krzysztof Lis, Krishna Nakka, Pascal Fua, Mathieu Salzmann; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 2152-2161

Abstract

Classical semantic segmentation methods, including recent deep learning ones, assume that all classes observed at test time have been seen during training. In this paper, we tackle the more realistic scenario in which unexpected objects of unknown classes can appear at test time. The main trends in this area either leverage prediction uncertainty to flag low-confidence regions as unknown, or rely on autoencoders and highlight poorly decoded regions. Having observed that, in both cases, the detected regions typically do not correspond to unexpected objects, we introduce a drastically different strategy: it relies on the intuition that the network will produce spurious labels in regions depicting unexpected objects. Resynthesizing the image from the resulting semantic map will therefore yield significant appearance differences with respect to the input image. In other words, we translate the problem of detecting unknown classes into one of identifying poorly resynthesized image regions. We show that this approach outperforms both uncertainty- and autoencoder-based methods.
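The pipeline the abstract describes — segment, resynthesize from the semantic map, then flag pixels where the resynthesis disagrees with the input — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `segment` and `resynthesize` functions below are hypothetical stand-ins (a real system would use a segmentation CNN and a conditional generative model), and the threshold value is arbitrary.

```python
import numpy as np

def segment(image):
    # Hypothetical stand-in for a semantic segmentation network:
    # assigns a label per pixel by crude intensity quantization.
    return (image > 0.5).astype(np.int64)

def resynthesize(label_map):
    # Hypothetical stand-in for a conditional image generator that
    # renders a canonical appearance for each semantic label.
    canonical = {0: 0.2, 1: 0.8}
    out = np.zeros(label_map.shape, dtype=np.float64)
    for label, value in canonical.items():
        out[label_map == label] = value
    return out

def unexpected_object_mask(image, threshold=0.2):
    """Flag pixels where the resynthesized image disagrees with the input.

    Unexpected objects receive spurious labels, so the resynthesis
    differs visibly from the input in those regions.
    """
    labels = segment(image)
    resynth = resynthesize(labels)
    discrepancy = np.abs(image - resynth)  # per-pixel appearance difference
    return discrepancy > threshold

# Toy example: a uniform "known" background (0.2) with a small patch
# of unfamiliar appearance (0.45) that the segmenter mislabels.
img = np.full((8, 8), 0.2)
img[2:4, 2:4] = 0.45
mask = unexpected_object_mask(img)
```

Here the patch is labeled as the background class, so it is resynthesized at the background's canonical appearance (0.2), and its 0.25 discrepancy exceeds the threshold while the true background scores zero; only the patch is flagged. In the paper, the scalar discrepancy is replaced by a learned comparison between the input and the resynthesized image.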

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Lis_2019_ICCV,
author = {Lis, Krzysztof and Nakka, Krishna and Fua, Pascal and Salzmann, Mathieu},
title = {Detecting the Unexpected via Image Resynthesis},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}