FishEyeRecNet: A Multi-Context Collaborative Deep Network for Fisheye Image Rectification
Xiaoqing Yin, Xinchao Wang, Jun Yu, Maojun Zhang, Pascal Fua, Dacheng Tao; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 469-484
Abstract
Images captured by fisheye lenses violate the pinhole camera assumption and suffer from distortions. Rectification of fisheye images is therefore a crucial preprocessing step for many computer vision applications. In this paper, we propose an end-to-end multi-context collaborative deep network for removing distortions from single fisheye images. In contrast to conventional approaches, which focus on extracting hand-crafted features from input images, our method learns high-level semantics and low-level appearance features simultaneously to estimate the distortion parameters. To facilitate training, we construct a synthesized dataset that covers various scenes and distortion parameter settings. Experiments on both synthesized and real-world datasets show that the proposed model significantly outperforms current state-of-the-art methods. Our code and synthesized dataset will be made publicly available.
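The abstract mentions synthesizing fisheye training data from ordinary images. The sketch below illustrates one way such pairs could be generated, assuming a generic polynomial radial distortion model r_d = f(θ + k1·θ³ + k2·θ⁵ + ...); this is not the authors' released pipeline, and the focal length `f` and coefficients `ks` are illustrative placeholders rather than values from the paper.

```python
# Hedged sketch: warp a pinhole image into a synthetic fisheye image using an
# assumed polynomial radial distortion model. Not the paper's exact generator.
import cv2
import numpy as np

def synthesize_fisheye(img, f=300.0, ks=(0.05, 0.01, 0.002, 0.0005)):
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0

    # Pixel grid of the output (distorted) image.
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))

    # Radius of each output pixel from the principal point, in focal units.
    rd = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2) / f

    # Invert r_d(theta) = theta + k1*theta^3 + ... by fixed-point iteration.
    theta = rd.copy()
    for _ in range(10):
        theta = rd / (1.0 + ks[0] * theta**2 + ks[1] * theta**4
                          + ks[2] * theta**6 + ks[3] * theta**8)

    # Corresponding pinhole (undistorted) radius, assuming FOV < 180 degrees.
    ru = np.tan(theta)
    scale = ru / np.maximum(rd, 1e-8)

    # Sampling maps: where each fisheye pixel reads from in the source image.
    map_x = ((xs - cx) * scale + cx).astype(np.float32)
    map_y = ((ys - cy) * scale + cy).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)

# Usage: fisheye = synthesize_fisheye(cv2.imread("scene.jpg"))
```

Sampling the distortion coefficients over a range of values, as the paper's synthesized dataset does for its parameter settings, would yield (distorted image, parameters) pairs suitable for supervised training.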
Related Material
[pdf]
[arXiv]
[bibtex]
@InProceedings{Yin_2018_ECCV,
author = {Yin, Xiaoqing and Wang, Xinchao and Yu, Jun and Zhang, Maojun and Fua, Pascal and Tao, Dacheng},
title = {FishEyeRecNet: A Multi-Context Collaborative Deep Network for Fisheye Image Rectification},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}