CoDi-2: In-Context Interleaved and Interactive Any-to-Any Generation
Abstract
We present CoDi-2, a Multimodal Large Language Model (MLLM) for learning in-context interleaved multimodal representations. By aligning modalities with language for both encoding and generation, CoDi-2 empowers Large Language Models (LLMs) to understand modality-interleaved instructions and in-context examples, and to autoregressively generate grounded and coherent multimodal outputs in an any-to-any input-output modality paradigm. To train CoDi-2, we build a large-scale generation dataset encompassing in-context multimodal instructions across text, vision, and audio. CoDi-2 demonstrates a wide range of zero-shot and few-shot capabilities for tasks such as editing, exemplar learning, composition, and reasoning. CoDi-2 surpasses previous domain-specific models on tasks such as subject-driven image generation, vision transformation, and audio editing, and showcases a significant advancement in integrating diverse multimodal tasks with sequential generation.
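
As a rough illustration of the flow the abstract describes (non-text modality features aligned with the language embedding space, interleaved with text tokens, processed autoregressively by an LLM backbone, and projected back out for modality-specific decoding), the following PyTorch sketch wires up that pipeline end to end. All class names, feature dimensions, and the tiny Transformer stand-in for the LLM are hypothetical placeholders chosen for this example; they are not CoDi-2's actual components.

import torch
import torch.nn as nn


class AnyToAnySketch(nn.Module):
    """Toy sketch of an interleaved any-to-any pipeline (not the CoDi-2 architecture)."""

    def __init__(self, vocab_size=32000, d_model=512, d_image=1024, d_audio=768):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # Align non-text modality features with the language embedding space.
        self.image_in = nn.Linear(d_image, d_model)
        self.audio_in = nn.Linear(d_audio, d_model)
        # Stand-in for the autoregressive LLM backbone.
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        # Project hidden states back to features that modality decoders could consume.
        self.text_out = nn.Linear(d_model, vocab_size)
        self.image_out = nn.Linear(d_model, d_image)
        self.audio_out = nn.Linear(d_model, d_audio)

    def forward(self, text_ids, image_feats, audio_feats):
        # Interleave text, image, and audio segments into one token sequence.
        seq = torch.cat(
            [self.text_embed(text_ids), self.image_in(image_feats), self.audio_in(audio_feats)],
            dim=1,
        )
        # Causal mask so the backbone processes the sequence autoregressively.
        n = seq.size(1)
        causal = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)
        hidden = self.backbone(seq, mask=causal)
        # Any-to-any: the same hidden states drive text, image, and audio heads.
        return self.text_out(hidden), self.image_out(hidden), self.audio_out(hidden)


if __name__ == "__main__":
    model = AnyToAnySketch()
    text_ids = torch.randint(0, 32000, (1, 16))   # tokenized instruction
    image_feats = torch.randn(1, 4, 1024)         # pre-extracted image features (placeholder)
    audio_feats = torch.randn(1, 4, 768)          # pre-extracted audio features (placeholder)
    text_logits, image_pred, audio_pred = model(text_ids, image_feats, audio_feats)
    print(text_logits.shape, image_pred.shape, audio_pred.shape)

In this sketch the modality projections play the role of "aligning modalities with language," and the shared backbone output feeding three heads mirrors the any-to-any input-output paradigm; in practice the paper pairs the LLM with dedicated multimodal encoders and generative decoders rather than simple linear layers.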
Related Material

[bibtex]
@InProceedings{Tang_2024_CVPR,
    author    = {Tang, Zineng and Yang, Ziyi and Khademi, Mahmoud and Liu, Yang and Zhu, Chenguang and Bansal, Mohit},
    title     = {CoDi-2: In-Context Interleaved and Interactive Any-to-Any Generation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {27425-27434}
}