Streamlining Image Editing with Layered Diffusion Brushes

Peyman Gholami, Robert Xiao; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 17368-17378

Abstract


Denoising diffusion models have emerged as powerful tools for image manipulation, yet interactive, localized editing workflows remain underdeveloped. We introduce Layered Diffusion Brushes (LDB), a novel training-free framework that enables interactive, layer-based editing using standard diffusion models. LDB defines each "layer" as a self-contained set of parameters guiding the generative process, enabling independent, non-destructive, and fine-grained prompt-guided edits, even in overlapping regions. LDB leverages a unique intermediate latent caching approach to reduce each edit to only a few denoising steps, achieving 140 ms per edit on consumer GPUs. An editor implementing LDB, incorporating familiar layer concepts, was evaluated via user study and quantitative metrics. Results demonstrate LDB's superior speed alongside comparable or improved image quality, background preservation, and edit fidelity relative to state-of-the-art methods across various sequential image manipulation tasks. The findings highlight LDB's ability to significantly enhance creative workflows by providing an intuitive and efficient approach to diffusion-based image editing and its potential for expansion into related subdomains, such as video editing.
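The caching idea described above — store an intermediate latent from the full denoising run, then replay only the final few steps for each masked edit — can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the toy `denoise_step` below stands in for a real diffusion model's denoiser, and all names (`generate_with_cache`, `edit_from_cache`, the seed-as-prompt convention) are hypothetical.

```python
import numpy as np

def denoise_step(latent, step, prompt_seed):
    # Toy stand-in for one denoising step of a diffusion model;
    # the seed plays the role of the (prompt-conditioned) guidance.
    rng = np.random.default_rng(prompt_seed + step)
    return 0.9 * latent + 0.1 * rng.standard_normal(latent.shape)

def generate_with_cache(init_latent, total_steps, cache_at, base_seed):
    """Run the full denoising trajectory once, caching the intermediate
    latent at step `cache_at` for later reuse."""
    latent = init_latent
    cache = None
    for t in range(total_steps):
        if t == cache_at:
            cache = latent.copy()
        latent = denoise_step(latent, t, base_seed)
    return latent, cache

def edit_from_cache(cache, total_steps, cache_at, base_seed, edit_seed, mask):
    """Replay only the final (total_steps - cache_at) steps from the cached
    latent, blending edit-guided updates inside `mask` so the background
    (outside the mask) follows the original trajectory untouched."""
    latent = cache.copy()
    for t in range(cache_at, total_steps):
        base = denoise_step(latent, t, base_seed)    # original guidance
        edited = denoise_step(latent, t, edit_seed)  # edit-prompt guidance
        latent = np.where(mask, edited, base)        # layer-masked blend
    return latent
```

Because each edit replays only `total_steps - cache_at` steps rather than the whole trajectory, the cost per edit drops roughly in proportion — the same effect that underlies the paper's reported interactive latencies, though the real system operates on a diffusion model's latents rather than this toy process.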

Related Material


@InProceedings{Gholami_2025_ICCV,
    author    = {Gholami, Peyman and Xiao, Robert},
    title     = {Streamlining Image Editing with Layered Diffusion Brushes},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {17368-17378}
}