EGformer: Equirectangular Geometry-biased Transformer for 360 Depth Estimation

Ilwi Yun, Chanyong Shin, Hyunku Lee, Hyuk-Jae Lee, Chae Eun Rhee; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 6101-6112

Abstract


Estimating the depths of equirectangular (i.e., 360°) images (EIs) is challenging given the distorted 180° × 360° field of view, which is hard to address with a convolutional neural network (CNN). Although a transformer with global attention achieves significant improvements over CNNs on the EI depth estimation task, it is computationally inefficient, which motivates a transformer with local attention. However, to apply local attention to EIs successfully, a specific strategy that simultaneously addresses the distorted equirectangular geometry and the limited receptive field is required. Prior works have addressed only one of the two, occasionally resulting in unsatisfactory depth estimates. In this paper, we propose an equirectangular geometry-biased transformer termed EGformer. While limiting the computational cost and the number of network parameters, EGformer enables the extraction of equirectangular geometry-aware local attention with a large receptive field. To achieve this, we actively utilize the equirectangular geometry as a bias for the local attention instead of struggling to reduce the distortion of EIs. Compared with the most recent EI depth estimation studies, the proposed approach yields the best depth outcomes overall with the lowest computational cost and the fewest parameters, demonstrating the effectiveness of the proposed methods.
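
The core idea, biasing local attention logits with a term derived from the equirectangular geometry rather than undistorting the image first, can be sketched as below. This is an illustrative sketch only: the class name EquirectBiasedAttention, the log-cosine latitude bias, and the per-head learnable scale are assumptions made here for exposition, not the authors' actual EGformer formulation, which defines its own geometry-derived bias terms.

    # Illustrative sketch (not the authors' EGformer code) of local window
    # attention whose logits are offset by a latitude-dependent bias.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EquirectBiasedAttention(nn.Module):
        """Local attention with an equirectangular geometry bias (hypothetical)."""
        def __init__(self, dim, num_heads):
            super().__init__()
            self.num_heads = num_heads
            self.scale = (dim // num_heads) ** -0.5
            self.qkv = nn.Linear(dim, dim * 3)
            self.proj = nn.Linear(dim, dim)
            # One learnable scale per head for the geometry bias (assumption).
            self.bias_scale = nn.Parameter(torch.ones(num_heads))

        def forward(self, x, lat):
            # x:   (num_windows, N, dim)  tokens, one local window per batch entry
            # lat: (num_windows, N)       latitude of each token, in radians
            B, N, C = x.shape
            qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads)
            q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each: (B, heads, N, head_dim)
            attn = (q @ k.transpose(-2, -1)) * self.scale  # (B, heads, N, N)

            # Equirectangular pixels are horizontally stretched by ~1/cos(latitude)
            # near the poles; log(cos(lat)) down-weights heavily distorted key
            # tokens in the softmax. (Illustrative choice of bias term only.)
            geo = torch.log(torch.cos(lat).clamp(min=1e-4))       # (B, N)
            bias = self.bias_scale.view(1, -1, 1, 1) * geo[:, None, None, :]
            attn = F.softmax(attn + bias, dim=-1)

            out = (attn @ v).transpose(1, 2).reshape(B, N, C)
            return self.proj(out)

    # Example: 16 local windows of 64 tokens each, 96-dim features, 4 heads.
    layer = EquirectBiasedAttention(dim=96, num_heads=4)
    x = torch.randn(16, 64, 96)
    lat = torch.empty(16, 64).uniform_(-1.5, 1.5)  # latitudes in radians
    y = layer(x, lat)                              # (16, 64, 96)

Because the bias enters as an additive offset to the attention logits, it steers where each token attends without any resampling of the input, which is consistent with the abstract's claim of avoiding explicit distortion reduction.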

Related Material


@InProceedings{Yun_2023_ICCV,
    author    = {Yun, Ilwi and Shin, Chanyong and Lee, Hyunku and Lee, Hyuk-Jae and Rhee, Chae Eun},
    title     = {EGformer: Equirectangular Geometry-biased Transformer for 360 Depth Estimation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {6101-6112}
}