Evaluating the Impact of Wide-Angle Lens Distortion on Learning-Based Depth Estimation
Most computer vision research focuses on narrow-angle lenses and is not adapted to super-wide-angle (also known as spherical) lenses. This is mainly because current neural networks are not designed or trained to interpret the significant barrel distortion that such wide-angle lenses introduce into the captured image. Because these lenses capture a half-sphere, or a section of a sphere, in object space, barrel distortion appears when the image is projected onto a flat 2D image sensor. By controlling this distortion at the lens design stage, camera designers can create areas of the image with augmented resolution. In this work, we analyze the impact of such augmented resolution on computer vision accuracy, using single-image depth estimation as a case study. To this end, 360° panorama datasets are warped to simulate different wide-angle lens datasets, which are then used to train identical neural networks. Each simulated lens uses spatially varying non-linear distortion to provide specific areas of the image with augmented resolution. We show that this property leads to better local accuracy in depth estimation. We also demonstrate that taking lens manufacturing constraints into account improves performance when testing on realistic lenses, especially in the areas of augmented resolution. We further show that this property locally narrows the gap with the accuracy obtained on perspective images, without cropping the field of view.
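To illustrate the warping step described above, the following sketch maps an equirectangular 360° panorama to a simulated fisheye image. It assumes the idealized equidistant projection (r = f·θ) as a stand-in for the barrel distortion of a wide-angle lens; the function name and this particular lens model are illustrative choices, not the exact distortion profiles studied in the paper.

```python
import numpy as np

def fisheye_from_panorama(pano, out_size, fov_deg=180.0):
    """Warp an equirectangular panorama into a simulated fisheye view.

    Assumes the equidistant projection r = f * theta, a common idealized
    barrel-distortion model (illustrative, not the paper's lens designs).
    Uses nearest-neighbor sampling for simplicity.
    """
    h_p, w_p = pano.shape[:2]
    n = out_size
    # Normalized image-plane coordinates in [-1, 1] (odd n keeps an exact center)
    ys, xs = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n),
                         indexing="ij")
    r = np.sqrt(xs**2 + ys**2)              # radial distance from image center
    valid = r <= 1.0                        # pixels inside the fisheye circle
    theta = r * np.radians(fov_deg) / 2.0   # equidistant: angle proportional to radius
    phi = np.arctan2(ys, xs)                # azimuth around the optical axis
    # Unit viewing direction (z = optical axis)
    dx = np.sin(theta) * np.cos(phi)
    dy = np.sin(theta) * np.sin(phi)
    dz = np.cos(theta)
    # Direction -> equirectangular (longitude, latitude) -> panorama pixel
    lon = np.arctan2(dx, dz)                      # in [-pi, pi]
    lat = np.arcsin(np.clip(dy, -1.0, 1.0))      # in [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * (w_p - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (h_p - 1)).astype(int)
    out = np.zeros((n, n) + pano.shape[2:], dtype=pano.dtype)
    out[valid] = pano[v[valid], u[valid]]
    return out
```

Because the angle grows linearly with the radius here, angular resolution is uniform; a designed lens would instead bend this mapping to concentrate pixels, i.e. augmented resolution, in chosen regions.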