OmniControlNet: Dual-stage Integration for Conditional Image Generation

Yilin Wang, Haiyang Xu, Xiang Zhang, Zeyuan Chen, Zhizhou Sha, Zirui Wang, Zhuowen Tu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 7436-7448

Abstract


We provide a two-way integration for the widely adopted ControlNet by integrating external condition generation algorithms into a single dense prediction method and by integrating its individually trained image generation processes into a single model. Despite its tremendous success, ControlNet's two-stage pipeline has two limitations: it is not self-contained (e.g., it calls external condition generation algorithms) and it carries large model redundancy (separately trained models for different types of conditioning inputs). Our proposed OmniControlNet integrates: 1) the condition generation (e.g., HED edges, depth maps, user scribbles, and animal poses) into a single multi-tasking dense prediction algorithm under task embedding guidance, and 2) the image generation processes for the different conditioning types into a single model under textual embedding guidance. OmniControlNet achieves significantly reduced model complexity and redundancy while producing images of comparable quality for conditioned text-to-image generation.
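The abstract does not include implementation details; the following is a minimal, hypothetical sketch of the first-stage idea, in which one task-embedding-conditioned dense predictor replaces the separate per-condition extractors (HED, depth, scribble, pose). All module names, shapes, and the embedding-injection scheme are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a single dense-prediction network shared across
# condition types, selected at inference time by a learned task embedding.
# Names (MultiTaskConditionGenerator, TASKS) are assumptions for illustration.

TASKS = ["hed_edge", "depth", "scribble", "animal_pose"]

class MultiTaskConditionGenerator(nn.Module):
    def __init__(self, dim: int = 64, num_tasks: int = len(TASKS)):
        super().__init__()
        # One learned vector per condition type ("task embedding guidance")
        self.task_embed = nn.Embedding(num_tasks, dim)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
        )
        # Shared head emits a 1-channel condition map (edge/depth/etc.)
        self.head = nn.Conv2d(dim, 1, 3, padding=1)

    def forward(self, image: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(image)                         # (B, dim, H, W)
        emb = self.task_embed(task_id)[:, :, None, None]   # (B, dim, 1, 1)
        # Additive conditioning is one simple choice; the paper may differ.
        return self.head(feat + emb)                       # (B, 1, H, W)

# Usage: one model, switched between condition types by an index.
img = torch.randn(2, 3, 64, 64)
task = torch.tensor([TASKS.index("depth")] * 2)
cond_map = MultiTaskConditionGenerator()(img, task)
print(cond_map.shape)  # torch.Size([2, 1, 64, 64])
```

The second stage follows the same pattern: rather than training one ControlNet branch per condition type, a single generation model would be steered by a textual embedding identifying the conditioning modality.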

Related Material


[bibtex]
@InProceedings{Wang_2024_CVPR,
    author    = {Wang, Yilin and Xu, Haiyang and Zhang, Xiang and Chen, Zeyuan and Sha, Zhizhou and Wang, Zirui and Tu, Zhuowen},
    title     = {OmniControlNet: Dual-stage Integration for Conditional Image Generation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {7436-7448}
}