Uniform Attention Maps: Boosting Image Fidelity in Reconstruction and Editing
Abstract
Text-guided image generation and editing using diffusion models have achieved remarkable advancements. Among these, tuning-free methods have gained attention for their ability to perform edits without extensive model adjustments, offering simplicity and efficiency. However, existing tuning-free approaches often struggle with balancing fidelity and editing precision. Reconstruction errors in DDIM Inversion are partly attributed to the cross-attention mechanism in U-Net, which introduces misalignments during the inversion and reconstruction process. To address this, we analyze reconstruction from a structural perspective and propose a novel approach that replaces traditional cross-attention with uniform attention maps, significantly enhancing image reconstruction fidelity. Our method effectively minimizes distortions caused by varying text conditions during noise prediction. To complement this improvement, we introduce an adaptive mask-guided editing technique that integrates seamlessly with our reconstruction approach, ensuring consistency and accuracy in editing tasks. Experimental results demonstrate that our approach not only excels in achieving high-fidelity image reconstruction but also performs robustly in real image composition and editing scenarios. This study underscores the potential of uniform attention maps to enhance the fidelity and versatility of diffusion-based image processing methods. Code is available at https://github.com/Mowenyii/Uniform-Attention-Maps.
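The core mechanism the abstract describes is simple enough to sketch: during inversion and reconstruction, the cross-attention probabilities in the U-Net are replaced by a uniform map, so each image query attends equally to every text token and the predicted noise no longer varies with the prompt. The PyTorch snippet below is a minimal illustration of that substitution on a generic scaled-dot-product layer; the function name, shapes, and the uniform flag are illustrative assumptions rather than the authors' code (see the linked repository for the official implementation).

import torch
import torch.nn.functional as F

def cross_attention(q, k, v, uniform=False):
    # q: (B, Nq, d) image-token queries; k, v: (B, Nk, d) text-token keys/values.
    if uniform:
        # Uniform attention map: attn[b, i, j] = 1 / Nk for every query i,
        # so the output is simply the mean of the value vectors and is
        # independent of how the prompt aligns with the image features.
        return v.mean(dim=1, keepdim=True).expand(-1, q.shape[1], -1)
    # Standard scaled dot-product cross-attention for comparison.
    scale = q.shape[-1] ** -0.5
    attn = F.softmax((q @ k.transpose(-2, -1)) * scale, dim=-1)  # (B, Nq, Nk)
    return attn @ v

# Example with hypothetical Stable-Diffusion-like shapes:
q = torch.randn(1, 64, 320)      # 64 latent positions
k = v = torch.randn(1, 77, 320)  # 77 text tokens
out = cross_attention(q, k, v, uniform=True)  # (1, 64, 320)

Because the uniform map averages the value vectors identically for every query, the same latent update is produced regardless of the conditioning text, which is why the substitution removes prompt-induced drift during reconstruction.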
Related Material
[pdf]
[supp]
[arXiv]
[bibtex]

@InProceedings{Mo_2025_WACV,
  author    = {Mo, Wenyi and Zhang, Tianyu and Bai, Yalong and Su, Bing and Wen, Ji-Rong},
  title     = {Uniform Attention Maps: Boosting Image Fidelity in Reconstruction and Editing},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {4420-4429}
}