Dfilled: Repurposing Edge-Enhancing Diffusion for Guided DSM Void Filling
Abstract
Digital Surface Models (DSMs) are essential for accurately representing Earth's topography in geospatial analyses. DSMs capture detailed elevations of natural and man-made features, crucial for applications like urban planning, vegetation studies, and 3D reconstruction. However, DSMs derived from stereo satellite imagery often contain voids or missing data due to occlusions, shadows, and low-signal areas. Previous studies have primarily focused on void filling for Digital Elevation Models (DEMs) and Digital Terrain Models (DTMs), employing methods such as inverse distance weighting (IDW), kriging, and spline interpolation. While effective for simpler terrains, these approaches often fail to handle the intricate structures present in DSMs. To overcome these limitations, we introduce Dfilled, a guided DSM void filling method that leverages optical remote sensing images through edge-enhancing diffusion. Dfilled repurposes deep anisotropic diffusion models, originally designed for super-resolution tasks, to inpaint DSMs. Additionally, we utilize Perlin noise to create inpainting masks that mimic natural void patterns in DSMs. Experimental evaluations demonstrate that Dfilled surpasses traditional interpolation methods and deep learning approaches in DSM void filling tasks. Both quantitative and qualitative assessments highlight the method's ability to manage complex features and deliver accurate, visually coherent results.
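The abstract only sketches the mask-generation step, but the idea of thresholding Perlin noise to obtain natural-looking void masks can be illustrated in a few lines. The snippet below is a minimal NumPy sketch, not the authors' implementation: the function names (perlin_2d, perlin_void_mask) and the tile size, lattice resolution, and void coverage are illustrative assumptions. It samples one octave of 2D Perlin (gradient) noise and keeps the highest-valued fraction of pixels as the void region to be inpainted.

import numpy as np

def perlin_2d(shape, res, rng):
    # One octave of 2D Perlin (gradient) noise; `shape` must be divisible by `res`.
    d = (shape[0] // res[0], shape[1] // res[1])
    grid = np.mgrid[0:res[0]:1 / d[0], 0:res[1]:1 / d[1]].transpose(1, 2, 0) % 1
    # Random unit gradient vectors on the lattice corners.
    angles = 2 * np.pi * rng.random((res[0] + 1, res[1] + 1))
    grads = np.dstack((np.cos(angles), np.sin(angles)))
    g00 = grads[:-1, :-1].repeat(d[0], 0).repeat(d[1], 1)
    g10 = grads[1:, :-1].repeat(d[0], 0).repeat(d[1], 1)
    g01 = grads[:-1, 1:].repeat(d[0], 0).repeat(d[1], 1)
    g11 = grads[1:, 1:].repeat(d[0], 0).repeat(d[1], 1)
    # Dot products of the per-pixel offset vectors with the corner gradients.
    n00 = np.sum(grid * g00, axis=2)
    n10 = np.sum(np.dstack((grid[..., 0] - 1, grid[..., 1])) * g10, axis=2)
    n01 = np.sum(np.dstack((grid[..., 0], grid[..., 1] - 1)) * g01, axis=2)
    n11 = np.sum(np.dstack((grid[..., 0] - 1, grid[..., 1] - 1)) * g11, axis=2)
    # Quintic fade followed by bilinear blending of the four corner contributions.
    t = 6 * grid**5 - 15 * grid**4 + 10 * grid**3
    n0 = n00 * (1 - t[..., 0]) + t[..., 0] * n10
    n1 = n01 * (1 - t[..., 0]) + t[..., 0] * n11
    return np.sqrt(2) * ((1 - t[..., 1]) * n0 + t[..., 1] * n1)

def perlin_void_mask(shape=(256, 256), res=(8, 8), coverage=0.2, seed=0):
    # Binary mask (1 = void pixel to inpaint) covering roughly `coverage` of the tile.
    rng = np.random.default_rng(seed)
    noise = perlin_2d(shape, res, rng)
    thresh = np.quantile(noise, 1.0 - coverage)
    return (noise > thresh).astype(np.uint8)

# Usage: knock out the masked pixels of a DSM tile before training or evaluation.
# mask = perlin_void_mask()
# dsm_with_voids = np.where(mask == 1, np.nan, dsm)

Thresholding by quantile keeps the simulated void fraction controllable; varying the lattice resolution (or summing several octaves) changes how blob-like or fragmented the resulting voids look.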
Related Material
[pdf] [arXiv] [bibtex]
@InProceedings{Panangian_2025_WACV,
  author    = {Panangian, Daniel and Bittner, Ksenia},
  title     = {Dfilled: Repurposing Edge-Enhancing Diffusion for Guided DSM Void Filling},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV) Workshops},
  month     = {February},
  year      = {2025},
  pages     = {563-571}
}