Exploiting Diffusion Prior for Generalizable Dense Prediction
Abstract
Contents generated by recent advanced Text-to-Image (T2I) diffusion models are sometimes too imaginative for existing off-the-shelf dense predictors to estimate, due to the immitigable domain gap. We introduce DMP, a pipeline utilizing pre-trained T2I models as a prior for dense prediction tasks. To address the misalignment between deterministic prediction tasks and stochastic T2I models, we reformulate the diffusion process through a sequence of interpolations, establishing a deterministic mapping between input RGB images and output prediction distributions. To preserve generalizability, we use low-rank adaptation to fine-tune pre-trained models. Extensive experiments across five tasks, including 3D property estimation, semantic segmentation, and intrinsic image decomposition, showcase the efficacy of the proposed method. Despite limited-domain training data, the approach yields faithful estimations for arbitrary images, surpassing existing state-of-the-art algorithms.
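The interpolation-based reformulation can be pictured as replacing the Gaussian noise of a standard diffusion forward process with the encoded input image, so every intermediate state is a fixed blend of the input latent and the prediction latent. The following is a minimal sketch of that idea, not the paper's published equations: the DDPM-style schedule, the latent shapes, and the exact blending rule are all illustrative assumptions.

```python
import torch

T = 1000
# DDPM-style cumulative schedule (an assumption; the paper's schedule may differ).
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def interpolate(x_rgb: torch.Tensor, y_pred: torch.Tensor, t: int) -> torch.Tensor:
    """Deterministic forward process: blend the prediction latent with the
    input-image latent instead of random noise, giving a fixed intermediate
    state z_t = sqrt(abar_t) * y + sqrt(1 - abar_t) * x for every t."""
    abar = alphas_cumprod[t]
    return abar.sqrt() * y_pred + (1.0 - abar).sqrt() * x_rgb

# Toy usage on random latents of matching (hypothetical) shape.
x = torch.randn(1, 4, 64, 64)  # encoded input RGB image
y = torch.randn(1, 4, 64, 64)  # encoded dense prediction (e.g., a depth map)
z_mid = interpolate(x, y, t=500)
```

Because both endpoints of the interpolation are determined by the input image, the same input always maps to the same trajectory, which is what aligns the stochastic T2I model with a deterministic prediction task.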
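Low-rank adaptation (LoRA) preserves generalizability by freezing the pre-trained weights and learning only a small low-rank residual on selected layers. Below is a generic sketch of that mechanism; the rank, scaling, and which layers to wrap are assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank residual:
    y = W x + (alpha / r) * B(A(x))."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pre-trained weights stay frozen
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)  # residual starts at zero, so the
        # wrapped layer initially matches the pre-trained one exactly
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.up(self.down(x))

# Toy usage: wrap a stand-in for one projection of a pre-trained U-Net.
proj = nn.Linear(320, 320)  # hypothetical placeholder layer
adapted = LoRALinear(proj, rank=8)
out = adapted(torch.randn(2, 320))
```

Since only the two small low-rank matrices receive gradients, fine-tuning on limited-domain data perturbs the T2I prior minimally, which is the stated motivation for using it here.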
Related Material

[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Lee_2024_CVPR,
  author    = {Lee, Hsin-Ying and Tseng, Hung-Yu and Lee, Hsin-Ying and Yang, Ming-Hsuan},
  title     = {Exploiting Diffusion Prior for Generalizable Dense Prediction},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {7861-7871}
}