Ranni: Taming Text-to-Image Diffusion for Accurate Instruction Following

Yutong Feng, Biao Gong, Di Chen, Yujun Shen, Yu Liu, Jingren Zhou; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 4744-4753

Abstract


Existing text-to-image (T2I) diffusion models usually struggle to interpret complex prompts, especially those involving quantity, object-attribute binding, and multi-subject descriptions. In this work, we introduce a semantic panel as the middleware in decoding texts to images, supporting the generator in better following instructions. The panel is obtained by arranging the visual concepts parsed from the input text with the aid of large language models, and is then injected into the denoising network as a detailed control signal to complement the text condition. To facilitate text-to-panel learning, we come up with a carefully designed semantic formatting protocol, accompanied by a fully automatic data preparation pipeline. Thanks to such a design, our approach, which we call Ranni, manages to enhance a pre-trained T2I generator in terms of its textual controllability. More importantly, the introduction of the generative middleware brings a more convenient form of interaction (i.e., directly adjusting the elements in the panel or using language instructions) and further allows users to finely customize their generation. Based on this, we develop a practical system and showcase its potential in continuous generation and chatting-based editing.
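
To make the two-stage pipeline described above concrete, below is a minimal Python sketch of how a semantic panel might be represented and used. All names (PanelItem, SemanticPanel, text_to_panel, panel_to_image) and the exact panel fields are hypothetical illustrations of the idea, not the authors' actual protocol or API; the real system uses a large language model for parsing and a diffusion denoising network for generation.

# Hypothetical sketch of the "semantic panel" middleware. The panel is
# assumed to hold one entry per visual concept (description, attributes,
# layout box), matching the paper's idea of arranging parsed concepts.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PanelItem:
    description: str                                    # e.g. "apple"
    attributes: List[str] = field(default_factory=list) # e.g. ["red"]
    box: Tuple[float, float, float, float] = (0.0, 0.0, 1.0, 1.0)  # normalized x0, y0, x1, y1

@dataclass
class SemanticPanel:
    items: List[PanelItem]

def text_to_panel(prompt: str) -> SemanticPanel:
    """Stage 1 (hypothetical): an LLM parses the prompt into visual
    concepts and arranges them into panel items with layout boxes.
    Here the LLM call is replaced by a hard-coded example output."""
    return SemanticPanel(items=[
        PanelItem("apple", ["red"], (0.10, 0.55, 0.45, 0.90)),
        PanelItem("apple", ["green"], (0.55, 0.55, 0.90, 0.90)),
    ])

def panel_to_image(panel: SemanticPanel, prompt: str) -> None:
    """Stage 2 (hypothetical): the panel is injected into the denoising
    network as a detailed control signal complementing the text condition.
    A real implementation would encode each item into extra conditioning
    for a pre-trained T2I model; here we only print what would be fed in."""
    for item in panel.items:
        print(f"conditioning on {item.attributes + [item.description]} at {item.box}")

if __name__ == "__main__":
    prompt = "a red apple and a green apple on a table"
    panel = text_to_panel(prompt)
    panel_to_image(panel, prompt)

One appeal of this design, as the abstract notes, is that the panel is editable: a user (or a chat instruction) can move a box or change an attribute in the structured representation and regenerate, which is what enables the continuous generation and chatting-based editing the paper demonstrates.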

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Feng_2024_CVPR,
    author    = {Feng, Yutong and Gong, Biao and Chen, Di and Shen, Yujun and Liu, Yu and Zhou, Jingren},
    title     = {Ranni: Taming Text-to-Image Diffusion for Accurate Instruction Following},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {4744-4753}
}