@InProceedings{Geng_2024_CVPR,
  author    = {Geng, Zigang and Yang, Binxin and Hang, Tiankai and Li, Chen and Gu, Shuyang and Zhang, Ting and Bao, Jianmin and Zhang, Zheng and Li, Houqiang and Hu, Han and Chen, Dong and Guo, Baining},
  title     = {InstructDiffusion: A Generalist Modeling Interface for Vision Tasks},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {12709-12720}
}
InstructDiffusion: A Generalist Modeling Interface for Vision Tasks
Abstract
We present InstructDiffusion, a unified and generic framework for aligning computer vision tasks with human instructions. Unlike existing approaches that integrate prior knowledge and pre-define the output space (e.g., categories and coordinates) for each vision task, we cast diverse vision tasks into a human-intuitive image-manipulating process whose output space is a flexible and interactive pixel space. Concretely, the model is built upon the diffusion process and is trained to predict pixels according to user instructions, such as encircling the man's left shoulder in red or applying a blue mask to the left car. InstructDiffusion can handle a variety of vision tasks, including understanding tasks (such as segmentation and keypoint detection) and generative tasks (such as editing and enhancement), and outperforms prior methods on novel datasets. This represents a solid step towards a generalist modeling interface for vision tasks, advancing artificial general intelligence in the field of computer vision.