GEDepth: Ground Embedding for Monocular Depth Estimation
Xiaodong Yang, Zhuang Ma, Zhiyu Ji, Zhe Ren
Abstract
Monocular depth estimation is an ill-posed problem, as the same 2D image can be projected from infinitely many 3D scenes. Although the leading algorithms in this field have reported significant improvement, they are essentially tailored to the particular combination of pictorial observations and camera parameters (i.e., intrinsics and extrinsics), which strongly limits their generalizability in real-world scenarios. To cope with this difficulty, this paper proposes a novel ground embedding module that decouples camera parameters from pictorial cues, thus promoting the generalization capability. Given the camera parameters, our module generates the ground depth, which is stacked with the input image and referenced in the final depth prediction. A ground attention is designed within the module to optimally combine the ground depth with the residual depth. The proposed ground embedding is highly flexible and lightweight, yielding a plug-in module that is amenable to integration into various depth estimation networks. Experiments show that our approach achieves state-of-the-art results on popular benchmarks and, more importantly, brings significant improvement in cross-domain generalization.
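To make the geometry concrete, the sketch below derives a per-pixel ground depth map from camera parameters under a flat-ground, zero-pitch pinhole assumption, and then gates it against a residual depth with a toy attention map. The function names, the zero-pitch simplification, and the blending formula are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ground_depth_map(H, W, fy, cy, cam_height, eps=1e-6):
    """Depth of the flat ground plane at each pixel (illustrative sketch).

    Assumes a pinhole camera with zero pitch/roll mounted cam_height meters
    above a flat ground plane, with y pointing down in camera coordinates.
    Pixels at or above the horizon row (v <= cy) never intersect the ground
    and are assigned +inf. Standard geometry, not the paper's actual code.
    """
    v = np.arange(H, dtype=np.float64)
    denom = v - cy                    # signed pixel offset from the horizon row
    z = np.full(H, np.inf)
    below = denom > eps               # rows that look below the horizon
    z[below] = cam_height * fy / denom[below]   # similar triangles: z = h * fy / (v - cy)
    return np.tile(z[:, None], (1, W))          # ground depth is constant along each row

# Hypothetical combination with a predicted residual depth via a ground
# attention map A in [0, 1]; the paper's learned formulation may differ.
H, W = 96, 320
ground = ground_depth_map(H, W, fy=720.0, cy=48.0, cam_height=1.65)
residual = 80.0 * np.random.rand(H, W)             # stand-in for a network's residual branch
A = np.isfinite(ground).astype(np.float64) * 0.8   # toy attention: trust ground only below horizon
final_depth = A * np.clip(ground, 0.0, 80.0) + (1.0 - A) * residual
```

Because the ground depth is a pure function of the camera parameters, a plug-in of this kind lets the network learn pictorial cues separately from the camera geometry, which is the decoupling the abstract describes.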
Related Material

@InProceedings{Yang_2023_ICCV,
    author    = {Yang, Xiaodong and Ma, Zhuang and Ji, Zhiyu and Ren, Zhe},
    title     = {GEDepth: Ground Embedding for Monocular Depth Estimation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {12719-12727}
}