Retrieval-Augmented Layout Transformer for Content-Aware Layout Generation

Daichi Horita, Naoto Inoue, Kotaro Kikuchi, Kota Yamaguchi, Kiyoharu Aizawa; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 67-76

Abstract


Content-aware graphic layout generation aims to automatically arrange visual elements together with given content, such as an e-commerce product image. In this paper, we argue that current layout generation approaches suffer from limited training data for the high-dimensional layout structure. We show that simple retrieval augmentation can significantly improve generation quality. Our model, named Retrieval-Augmented Layout Transformer (RALF), retrieves nearest-neighbor layout examples based on an input image and feeds these results into an autoregressive generator. Our model can apply retrieval augmentation to various controllable generation tasks and yields high-quality layouts within a unified architecture. Extensive experiments show that RALF successfully generates content-aware layouts in both constrained and unconstrained settings and significantly outperforms the baselines.
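
To make the retrieval-augmentation idea concrete, the following is a minimal Python sketch of the retrieval step described in the abstract: given an embedding of the input image, look up the k nearest-neighbor layouts in a database by cosine similarity. Names such as retrieve_layouts, db_embs, and db_layouts are illustrative assumptions, not the authors' implementation or API.

    # Minimal sketch, assuming a precomputed database of (image embedding, layout) pairs.
    import numpy as np

    def retrieve_layouts(query_emb, db_embs, db_layouts, k=3):
        """Return the k layouts whose image embeddings are closest (cosine) to the query."""
        q = query_emb / np.linalg.norm(query_emb)
        d = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
        sims = d @ q                   # cosine similarity to every database image
        topk = np.argsort(-sims)[:k]   # indices of the k nearest neighbors
        return [db_layouts[i] for i in topk]

    # Toy database: 128-d image embeddings and their associated layouts
    # (each layout is a list of (label, x, y, w, h) element tuples).
    rng = np.random.default_rng(0)
    db_embs = rng.normal(size=(1000, 128))
    db_layouts = [[("text", 0.1, 0.1, 0.8, 0.2)] for _ in range(1000)]

    query_emb = rng.normal(size=128)   # embedding of the input canvas image
    neighbors = retrieve_layouts(query_emb, db_embs, db_layouts, k=3)
    print(len(neighbors))

In the approach described above, the retrieved layouts would then be encoded and fed, together with the image features, as additional conditioning to the autoregressive layout generator.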

Related Material


@InProceedings{Horita_2024_CVPR,
    author    = {Horita, Daichi and Inoue, Naoto and Kikuchi, Kotaro and Yamaguchi, Kota and Aizawa, Kiyoharu},
    title     = {Retrieval-Augmented Layout Transformer for Content-Aware Layout Generation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {67-76}
}