Learnable Fractional Reaction-Diffusion Dynamics for Under-Display ToF Imaging and Beyond

Xin Qiao, Matteo Poggi, Xing Wei, Pengchao Deng, Yanhui Zhou, Stefano Mattoccia; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 6080-6090

Abstract


Under-display ToF imaging aims to achieve accurate depth sensing through a ToF camera placed beneath a screen panel. However, transparent OLED (TOLED) layers introduce severe degradations, such as signal attenuation, multi-path interference (MPI), and temporal noise, that significantly compromise depth quality. To alleviate these degradations, we propose Learnable Fractional Reaction-Diffusion Dynamics (LFRD^2), a hybrid framework that combines the expressive power of neural networks with the interpretability of physical modeling. Specifically, we implement a time-fractional reaction-diffusion module that enables iterative depth refinement with dynamically generated differential orders, capturing long-term dependencies. In addition, we introduce an efficient continuous convolution operator via coefficient prediction and repeated differentiation to further improve restoration quality. Experiments on four benchmark datasets demonstrate the effectiveness of our approach. The code is publicly available at https://github.com/wudiqx106/LFRD2.
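The abstract does not spell out the discretization behind the time-fractional module. As a hedged illustration only, one standard way to realize a time-fractional reaction-diffusion update is the Grünwald–Letnikov (GL) scheme, in which each new state depends on a weighted sum over *all* past states — which is exactly where the long-term dependencies of fractional dynamics come from. The function names (`gl_weights`, `fractional_rd_step`, `laplacian`), the explicit-Euler-style update, and the fixed Laplacian diffusion below are assumptions for the sketch, not the authors' implementation (in LFRD^2 the differential order is generated dynamically by a network).

```python
import numpy as np

def gl_weights(alpha, n):
    # Grünwald–Letnikov coefficients w_k = (-1)^k * C(alpha, k),
    # via the stable recurrence w_0 = 1, w_k = w_{k-1} * (1 - (alpha+1)/k).
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def laplacian(u):
    # 5-point stencil with replicated (edge) boundary padding.
    p = np.pad(u, 1, mode="edge")
    return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u

def fractional_rd_step(history, alpha, dt, diffusion, reaction):
    # One explicit step of  D_t^alpha u = diffusion(u) + reaction(u)
    # using the GL discretization
    #   (1/dt^alpha) * sum_{k=0}^{m} w_k u^{m-k} = rhs(u^{m-1}),
    # solved for the new state u^m (the k=0 term, since w_0 = 1):
    #   u^m = dt^alpha * rhs - sum_{k=1}^{m} w_k u^{m-k}.
    u = history[-1]
    m = len(history)                      # past states u^0 .. u^{m-1}
    w = gl_weights(alpha, m)
    rhs = diffusion(u) + reaction(u)
    # Memory term: every past state contributes -> long-term dependency.
    mem = sum(w[k] * history[m - k] for k in range(1, m + 1))
    return dt**alpha * rhs - mem
```

For `alpha = 1` the weights collapse to `[1, -1, 0, 0, ...]`, so the scheme reduces to the classic forward-Euler step `u + dt * rhs`; for `0 < alpha < 1` the weights decay slowly and the whole history influences each refinement iteration.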

Related Material


@InProceedings{Qiao_2025_ICCV,
    author    = {Qiao, Xin and Poggi, Matteo and Wei, Xing and Deng, Pengchao and Zhou, Yanhui and Mattoccia, Stefano},
    title     = {Learnable Fractional Reaction-Diffusion Dynamics for Under-Display ToF Imaging and Beyond},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {6080-6090}
}