@InProceedings{Song_2025_ICCV,
  author    = {Song, Yiren and Chen, Danze and Shou, Mike Zheng},
  title     = {LayerTracer: Cognitive-Aligned Layered SVG Synthesis via Diffusion Transformer},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2025},
  pages     = {19731-19741}
}
LayerTracer: Cognitive-Aligned Layered SVG Synthesis via Diffusion Transformer
Abstract
Generating cognitive-aligned layered SVGs remains challenging: existing methods tend to produce either oversimplified single-layer outputs or optimization-induced shape redundancies. We propose LayerTracer, a DiT-based framework that bridges this gap by learning designers' layered SVG creation processes from a novel dataset of sequential design operations. Our approach operates in two phases: first, a text-conditioned DiT generates multi-phase rasterized construction blueprints that simulate human design workflows; second, layer-wise vectorization with path deduplication produces clean, editable SVGs. For image vectorization, we introduce a conditional diffusion mechanism that encodes reference images into latent tokens, guiding hierarchical reconstruction while preserving structural integrity. Extensive experiments show that LayerTracer surpasses optimization-based and neural baselines in generation quality and editability.
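The path-deduplication step in the second phase can be illustrated with a minimal sketch: given a sequence of vectorized layers, keep each path only in the earliest layer that introduces it, so later layers add only new geometry. This is a simplified illustration assuming paths are represented as hashable SVG path-data strings, not the authors' implementation.

```python
def deduplicate_layer_paths(layers):
    """Keep each vector path only in the earliest layer that introduces it.

    `layers` is a list of path lists; each path is any hashable object
    (e.g. an SVG path-data string). Hypothetical sketch of the
    "path deduplication" step described in the abstract.
    """
    seen = set()        # paths already emitted by an earlier layer
    deduped = []
    for layer in layers:
        kept = [p for p in layer if p not in seen]  # drop repeats
        seen.update(kept)
        deduped.append(kept)
    return deduped


# Example: the second blueprint phase repeats the first phase's path,
# so only the newly added path survives in layer 2.
layers = [["M0 0 L10 10"], ["M0 0 L10 10", "M20 20 L30 30"]]
print(deduplicate_layer_paths(layers))
```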