Sketch-to-Layout: Sketch-Guided Multimodal Layout Generation

Abstract
Graphic layout generation is a growing research area focused on generating aesthetically pleasing layouts, ranging from poster designs to documents. While recent research has explored ways to incorporate user constraints to guide layout generation, these constraints often require complex specifications, which reduces usability. We introduce an innovative approach that exploits user-provided sketches as intuitive constraints, and we empirically demonstrate the effectiveness of this new guidance method, establishing the sketch-to-layout problem as an underexplored but promising research direction. To tackle the sketch-to-layout problem, we propose a multimodal transformer-based solution that takes the sketch and the content assets as inputs and produces high-quality layouts. Since collecting sketch training data from human annotators is very costly, we introduce a novel and efficient method to synthetically generate training sketches at scale. We train and evaluate our model on three publicly available datasets: PubLayNet, DocLayNet and SlidesVQA, demonstrating that it outperforms state-of-the-art constraint-based methods while offering a more intuitive design experience. To facilitate future sketch-to-layout research, we release O(200k) synthetically generated sketches for the public datasets above.
Related Material

[pdf] [supp] [bibtex]

@InProceedings{Brioschi_2025_ICCV,
  author    = {Brioschi, Riccardo and Alekseev, Aleksandr and Nevali, Emanuele and D\"oner, Berkay and El Malki, Omar and Mitrevski, Blagoj and Kieliger, Leandro and Collier, Mark and Maksai, Andrii and Berent, Jesse and Musat, Claudiu Cristian and Kokiopoulou, Efi},
  title     = {Sketch-to-Layout: Sketch-Guided Multimodal Layout Generation},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2025},
  pages     = {1872-1884}
}
