BibTeX:
@InProceedings{Kondapaneni_2024_CVPR,
  author    = {Kondapaneni, Neehar and Marks, Markus and Knott, Manuel and Guimaraes, Rogerio and Perona, Pietro},
  title     = {Text-Image Alignment for Diffusion-Based Perception},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {13883-13893}
}
Text-Image Alignment for Diffusion-Based Perception
Abstract
Diffusion models are generative models with impressive text-to-image synthesis capabilities and have spurred a new wave of creative methods for classical machine learning tasks. However, the best way to harness the perceptual knowledge of these generative models for visual tasks is still an open question. Specifically, it is unclear how to use the prompting interface when applying diffusion backbones to vision tasks. We find that automatically generated captions can improve text-image alignment and significantly enhance a model's cross-attention maps, leading to better perceptual performance. Our approach improves upon the current state of the art in diffusion-based semantic segmentation on ADE20K and the current overall SOTA for depth estimation on NYUv2. Furthermore, our method generalizes to the cross-domain setting. We use model personalization and caption modifications to align our model to the target domain and find improvements over unaligned baselines. Our cross-domain object detection model, trained on Pascal VOC, achieves SOTA results on Watercolor2K. Our cross-domain segmentation method, trained on Cityscapes, achieves SOTA results on Dark Zurich-val and Nighttime Driving. Project page: vision.caltech.edu/TADP/. Code: github.com/damaggu/TADP
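Below is a minimal sketch of the core idea the abstract describes: automatically caption an image and use that caption, rather than a generic or empty prompt, as the text conditioning when extracting diffusion features for a perception head. The specific captioner (BLIP), diffusion checkpoint (Stable Diffusion v1-5), noising timestep, and hooked layer are illustrative assumptions, not the paper's exact configuration; see the code repository for the actual implementation.

```python
import numpy as np
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1) Automatically caption the image (BLIP is an illustrative stand-in
#    for whichever off-the-shelf captioner is used).
blip_proc = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
blip = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base").to(device)

image = Image.open("example.jpg").convert("RGB").resize((512, 512))
blip_inputs = blip_proc(image, return_tensors="pt").to(device)
caption = blip_proc.decode(blip.generate(**blip_inputs)[0],
                           skip_special_tokens=True)

# 2) Condition a Stable Diffusion backbone on that caption and pull out
#    intermediate UNet features for a downstream perception head.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float32).to(device)

with torch.no_grad():
    # Encode the image into the VAE latent space and lightly noise it
    # (timestep 100 is an arbitrary illustrative choice).
    pixels = torch.from_numpy(np.array(image)).permute(2, 0, 1)[None]
    pixels = pixels.float().to(device) / 127.5 - 1.0
    latents = pipe.vae.encode(pixels).latent_dist.sample()
    latents = latents * pipe.vae.config.scaling_factor
    t = torch.tensor([100], device=device)
    noisy = pipe.scheduler.add_noise(latents, torch.randn_like(latents), t)

    # Embed the *generated* caption, not a generic prompt.
    tokens = pipe.tokenizer(caption, padding="max_length",
                            max_length=pipe.tokenizer.model_max_length,
                            return_tensors="pt").input_ids.to(device)
    text_emb = pipe.text_encoder(tokens)[0]

    # Grab mid-block features with a forward hook; a real segmentation or
    # depth head would consume several such feature maps.
    feats = {}
    hook = pipe.unet.mid_block.register_forward_hook(
        lambda module, args, output: feats.update(mid=output))
    pipe.unet(noisy, t, encoder_hidden_states=text_emb)
    hook.remove()

print(caption, feats["mid"].shape)
```

The point of the sketch is the conditioning choice: because the caption names the objects actually present, the UNet's cross-attention maps attend to the right regions, which is the alignment effect the abstract credits for the downstream gains.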