Sat2Density: Faithful Density Learning from Satellite-Ground Image Pairs

Ming Qian, Jincheng Xiong, Gui-Song Xia, Nan Xue; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 3683-3692

Abstract


This paper aims to develop an accurate 3D geometry representation from satellite images using satellite-ground image pairs. We focus on the challenging problem of 3D-aware ground-view synthesis from a satellite image. Drawing inspiration from the density field representation used in volumetric neural rendering, we propose a new approach called Sat2Density. Our method exploits the properties of ground-view panoramas in the sky and non-sky regions to learn faithful density fields of 3D scenes from a geometric perspective. Unlike other methods that require extra depth information during training, Sat2Density automatically learns accurate and faithful 3D geometry via the density representation, without depth supervision. This advancement significantly improves the ground-view panorama synthesis task. Our study also provides a new geometric perspective for understanding the relationship between satellite and ground-view images in 3D space.
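
To make the density-field idea concrete, the sketch below illustrates, under our own assumptions rather than the authors' released code, how a per-voxel density volume predicted from a satellite image could be alpha-composited along a single ground-view panorama ray, with the residual transmittance loosely standing in for the sky region the abstract mentions. The names sample_density, density_volume, and color_volume are hypothetical and used only for illustration.

    import numpy as np

    def sample_density(density_volume, points):
        # Nearest-neighbour lookup of per-voxel density at 3D points (illustrative only).
        idx = np.clip(np.round(points).astype(int), 0, np.array(density_volume.shape) - 1)
        return density_volume[idx[:, 0], idx[:, 1], idx[:, 2]]

    def render_panorama_ray(density_volume, color_volume, origin, direction,
                            n_samples=64, far=32.0):
        # Alpha-composite colors along one ground-view ray using densities (sigma),
        # as in standard volumetric rendering.
        t = np.linspace(0.0, far, n_samples)
        delta = np.full(n_samples, far / n_samples)                 # step between samples
        points = origin[None, :] + t[:, None] * direction[None, :]  # (n_samples, 3)
        sigma = sample_density(density_volume, points)              # per-sample density
        alpha = 1.0 - np.exp(-sigma * delta)                        # per-sample opacity
        trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))  # transmittance
        weights = trans * alpha
        idx = np.clip(np.round(points).astype(int), 0, np.array(density_volume.shape) - 1)
        colors = color_volume[idx[:, 0], idx[:, 1], idx[:, 2]]      # (n_samples, 3)
        rgb = (weights[:, None] * colors).sum(axis=0)               # composited color
        sky_weight = 1.0 - weights.sum()                            # leftover mass -> sky
        return rgb, sky_weight

With a hypothetical density_volume of shape (H, W, D) and color_volume of shape (H, W, D, 3), calling render_panorama_ray once per panorama pixel direction would produce an RGB panorama together with a per-pixel sky weight; in the paper this density field is learned end-to-end from satellite-ground pairs rather than scripted as above.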

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Qian_2023_ICCV,
    author    = {Qian, Ming and Xiong, Jincheng and Xia, Gui-Song and Xue, Nan},
    title     = {Sat2Density: Faithful Density Learning from Satellite-Ground Image Pairs},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {3683-3692}
}