Automatic Controllable Colorization via Imagination

Xiaoyan Cong, Yue Wu, Qifeng Chen, Chenyang Lei; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 2609-2619

Abstract


We propose a framework for automatic colorization that allows for iterative editing and modifications. The core of our framework lies in an imagination module: by understanding the content within a grayscale image, we utilize a pre-trained image generation model to generate multiple images that contain the same content. These images serve as references for coloring, mimicking the process of human experts. As the synthesized images can be imperfect or different from the original grayscale image, we propose a Reference Refinement Module to select the optimal reference composition. Unlike most previous end-to-end automatic colorization algorithms, our framework allows for iterative and localized modifications of the colorization results because we explicitly model the coloring samples. Extensive experiments demonstrate the superiority of our framework over existing automatic colorization algorithms in editability and flexibility. Project page: https://xy-cong.github.io/imagine-colorization/.
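The abstract's pipeline (imagine references, refine the reference choice, then colorize against it) can be sketched in simplified form. This is a toy illustration, not the authors' implementation: `select_reference` stands in for the Reference Refinement Module by picking the synthesized candidate whose grayscale structure best matches the input, and `colorize` is a trivial stand-in for the actual reference-based colorization network.

```python
# Hypothetical sketch of the imagination-based colorization pipeline.
# Images are flat lists of RGB tuples; all functions are toy stand-ins.

def to_grayscale(rgb):
    """Luminance of each RGB pixel (Rec. 601 weighting)."""
    return [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in rgb]

def structure_distance(gray_a, gray_b):
    """Mean absolute luminance difference between two grayscale images."""
    return sum(abs(a - b) for a, b in zip(gray_a, gray_b)) / len(gray_a)

def select_reference(gray_input, candidates):
    """Toy Reference Refinement: among the 'imagined' candidates, pick
    the one whose grayscale rendering is closest to the input."""
    return min(candidates,
               key=lambda ref: structure_distance(gray_input, to_grayscale(ref)))

def colorize(gray_input, reference):
    """Toy colorization: keep the input luminance, borrow the
    reference's color ratios per pixel."""
    out = []
    for lum, (r, g, b), ref_lum in zip(gray_input, reference,
                                       to_grayscale(reference)):
        scale = lum / ref_lum if ref_lum else 0.0
        out.append((r * scale, g * scale, b * scale))
    return out
```

Because the reference is an explicit, separate input, swapping it (or editing a region of it) and re-running `colorize` gives the kind of iterative, localized control the abstract describes.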

Related Material


@InProceedings{Cong_2024_CVPR,
    author    = {Cong, Xiaoyan and Wu, Yue and Chen, Qifeng and Lei, Chenyang},
    title     = {Automatic Controllable Colorization via Imagination},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {2609-2619}
}