Generating Manga From Illustrations via Mimicking Manga Creation Workflow

Lvmin Zhang, Xinrui Wang, Qingnan Fan, Yi Ji, Chunping Liu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 5642-5651

Abstract


We present a framework to generate manga from digital illustrations. In professional manga studios, the manga creation workflow consists of three key steps: (1) Artists use line drawings to delineate the structural outlines in manga storyboards. (2) Artists apply several types of regular screentones to render the shading, occlusion, and object materials. (3) Artists selectively paste irregular screen textures onto the canvas to achieve various background layouts or special effects. Motivated by this workflow, we propose a data-driven framework to convert a digital illustration into three corresponding components: manga line drawing, regular screentone, and irregular screen texture. These components can be directly composed into manga images and can be further retouched for richer manga creations. To this end, we create a large-scale dataset with these three components annotated by artists in a human-in-the-loop manner. We conduct both a perceptual user study and a qualitative evaluation of the generated manga, and observe that our generated image layers for these three components are practically usable in the daily work of manga artists. We provide 60 qualitative results and 15 additional comparisons in the supplementary material. We will make our presented manga dataset publicly available to assist related applications.
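The three decomposed layers can be blended back into a single manga page. As a rough sketch of such a compositing step, the snippet below multiplies grayscale line and tone layers and pastes the irregular texture only where a mask selects it; the function name, the multiplicative blend, and the mask-based pasting are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def compose_manga(line_drawing, screentone, screen_texture, texture_mask):
    """Blend grayscale layers (float arrays in [0, 1]) into one manga page.

    Hypothetical compositing: lines and tones are combined multiplicatively
    (darker pixels dominate), and the irregular texture is applied only where
    its mask is set, mimicking the selective-pasting step of the workflow.
    """
    page = line_drawing * screentone  # steps (1) + (2): outlines over tones
    # step (3): paste irregular texture where the mask selects it
    page = np.where(texture_mask > 0.5, page * screen_texture, page)
    return np.clip(page, 0.0, 1.0)

# Toy 2x2 layers: white = 1.0, black = 0.0
line = np.array([[1.0, 0.0], [1.0, 1.0]])
tone = np.array([[0.5, 1.0], [1.0, 0.5]])
tex  = np.array([[0.2, 0.2], [0.2, 0.2]])
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
print(compose_manga(line, tone, tex, mask))
```

In this toy example only the top-left pixel receives the texture; the top-right pixel stays black because the line layer is zero there.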

Related Material


[pdf]
[bibtex]
@InProceedings{Zhang_2021_CVPR,
    author    = {Zhang, Lvmin and Wang, Xinrui and Fan, Qingnan and Ji, Yi and Liu, Chunping},
    title     = {Generating Manga From Illustrations via Mimicking Manga Creation Workflow},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {5642-5651}
}