Semantic Segmentation of Fisheye Images

Gregor Blott, Masato Takami, Christian Heipke; Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018

Abstract


Semantic segmentation of fisheye images (e.g., from action cameras or smartphones) requires training approaches and data different from those used for rectilinear images obtained by central projection. Object shapes are distorted depending on the distance between the principal point and the object's position in the image; classical semantic segmentation approaches therefore perform worse on fisheye data than on rectilinear data. One potential solution is to record and annotate a new fisheye dataset; however, this is expensive and tedious. In this study, an alternative approach is presented that modifies the augmentation stage of deep learning training so that existing rectilinear training data can be re-used. In this way, a considerably higher semantic segmentation performance is obtained on fisheye images: +18.3% intersection over union (IoU) for action-camera test images, +8.3% IoU for artificially generated fisheye data, and +18.0% IoU for challenging security scenes acquired in bird's-eye view.
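The abstract does not specify the camera model used in the augmentation stage, but the general idea — warping rectilinear training images so they exhibit fisheye-like distortion — can be sketched as below. This is a minimal, hypothetical illustration assuming an equidistant fisheye model (r = f·θ) and a nearest-neighbor inverse remap; the function name, the focal-length default, and the zero padding are all assumptions, not details from the paper.

```python
import math

def fisheye_remap(img, f=None):
    """Warp a rectilinear image (2-D list of values) into an
    equidistant-fisheye view via an inverse nearest-neighbor remap.

    Hypothetical augmentation sketch: for each target (fisheye) pixel,
    find the rectilinear source pixel it corresponds to under the
    assumed equidistant model r_fisheye = f * theta, where the
    rectilinear (pinhole) model gives r_rect = f * tan(theta).
    """
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0  # principal point at image center
    if f is None:
        f = max(h, w) / 2.0  # assumed focal length in pixels

    out = [[0] * w for _ in range(h)]  # 0 = padding outside the source image
    for y in range(h):
        for x in range(w):
            dx, dy = x - cx, y - cy
            r = math.hypot(dx, dy)  # radius of the fisheye pixel
            if r == 0.0:
                out[y][x] = img[int(cy)][int(cx)]
                continue
            theta = r / f                # equidistant: r = f * theta
            r_src = f * math.tan(theta)  # rectilinear: r = f * tan(theta)
            sx = int(round(cx + dx * r_src / r))
            sy = int(round(cy + dy * r_src / r))
            if 0 <= sx < w and 0 <= sy < h:
                out[y][x] = img[sy][sx]
    return out
```

For segmentation training, the same remap would be applied to the label map (with nearest-neighbor sampling, so class IDs are not interpolated), so that image and ground truth stay aligned after the distortion.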

Related Material


[bibtex]
@InProceedings{Blott_2018_ECCV_Workshops,
author = {Blott, Gregor and Takami, Masato and Heipke, Christian},
title = {Semantic Segmentation of Fisheye Images},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV) Workshops},
month = {September},
year = {2018}
}