Diffusion Reflectance Map: Single-Image Stochastic Inverse Rendering of Illumination and Reflectance

Yuto Enyo, Ko Nishino; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 11873-11883

Abstract


Reflectance bounds the frequency spectrum of illumination in the object appearance. In this paper, we introduce the first stochastic inverse rendering method, which recovers the attenuated frequency spectrum of an illumination jointly with the reflectance of an object of known geometry from a single image. Our key idea is to solve this blind inverse problem in the reflectance map, an appearance representation invariant to the underlying geometry, by learning to reverse the image formation with a novel diffusion model which we refer to as the Diffusion Reflectance Map Network (DRMNet). Given an observed reflectance map converted and completed from the single input image, DRMNet generates a reflectance map corresponding to a perfect mirror sphere while jointly estimating the reflectance. The forward process can be understood as gradually filtering a natural illumination with lower- and lower-frequency reflectance and additive Gaussian noise. DRMNet learns to invert this process with two subnetworks, IllNet and RefNet, which work in concert towards this joint estimation. The network is trained on an extensive synthetic dataset and is demonstrated to generalize to real images, showing state-of-the-art accuracy on established datasets.
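The abstract describes the forward process as repeatedly filtering a natural illumination (the mirror-sphere reflectance map) with progressively lower-frequency reflectance plus additive Gaussian noise. The sketch below is only a toy illustration of that idea, not the paper's implementation: the functions `forward_step` and `forward_process`, the use of a Gaussian blur as a stand-in for convolution with a broader reflectance lobe, and the blur/noise schedule are all assumptions made here for clarity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def forward_step(reflectance_map, sigma, noise_std, rng):
    """One conceptual forward step: low-pass filter the current reflectance
    map (standing in for a lower-frequency reflectance lobe) and add
    Gaussian noise. The paper's actual filtering and schedule may differ."""
    # Blur only the spatial axes, not the color channels.
    filtered = gaussian_filter(reflectance_map, sigma=(sigma, sigma, 0))
    return filtered + rng.normal(scale=noise_std, size=reflectance_map.shape)


def forward_process(mirror_map, num_steps=10, seed=0):
    """Run the toy forward chain from a sharp mirror-sphere reflectance map
    toward a heavily filtered, noisy one (illustrative schedule only)."""
    rng = np.random.default_rng(seed)
    x = mirror_map.astype(np.float64)
    maps = [x]
    for t in range(1, num_steps + 1):
        # Progressively broader blur per step; small fixed noise (toy values).
        x = forward_step(x, sigma=0.5 * t, noise_std=0.01, rng=rng)
        maps.append(x)
    return maps


# Usage: a random H x W x 3 array as a stand-in for a mirror-sphere reflectance map.
toy_mirror_map = np.random.default_rng(1).random((64, 64, 3))
sequence = forward_process(toy_mirror_map, num_steps=5)
print(len(sequence), sequence[-1].shape)  # 6 (64, 64, 3)
```

DRMNet is trained to reverse such a chain, stepping from the observed (filtered) reflectance map back toward the perfect-mirror map while its two subnetworks jointly estimate the reflectance.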

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Enyo_2024_CVPR,
    author    = {Enyo, Yuto and Nishino, Ko},
    title     = {Diffusion Reflectance Map: Single-Image Stochastic Inverse Rendering of Illumination and Reflectance},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {11873-11883}
}