CLIPascene: Scene Sketching with Different Types and Levels of Abstraction

Yael Vinker, Yuval Alaluf, Daniel Cohen-Or, Ariel Shamir; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 4146-4156

Abstract

In this paper, we present a method for converting a given scene image into a sketch using different types and multiple levels of abstraction. We distinguish between two types of abstraction. The first considers the fidelity of the sketch, varying its representation from a precise portrayal of the input to a looser depiction. The second is defined by the visual simplicity of the sketch, moving from a detailed depiction to a sparse sketch. An explicit disentanglement into these two abstraction axes, with multiple levels along each, provides users with additional control over selecting the desired sketch based on their personal goals and preferences. To form a sketch at a given level of fidelity and simplification, we train two MLP networks. The first network learns the desired placement of strokes, while the second learns to gradually remove strokes from the sketch without harming its recognizability and semantics. Our approach can generate sketches of complex scenes, including those with intricate backgrounds (e.g., natural and urban settings) and subjects (e.g., animals and people), while depicting gradual abstractions of the input scene in terms of fidelity and simplicity.
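
To make the two-network setup described above concrete, the following is a minimal, illustrative PyTorch sketch, assuming strokes represented by 2D control points: one MLP refines stroke placement (the fidelity axis) and a second MLP predicts per-stroke keep probabilities for thinning the sketch (the simplicity axis). The class names, dimensions, residual design, and keep-probability formulation are assumptions for illustration only; the paper's actual stroke parameterization, differentiable rasterization, and CLIP-based objectives are omitted.

# Illustrative sketch only; names, dimensions, and losses are assumptions,
# not the paper's implementation.
import torch
import torch.nn as nn

NUM_STROKES = 64        # assumed number of strokes in the sketch
POINTS_PER_STROKE = 4   # assumed control points per stroke
DIM = NUM_STROKES * POINTS_PER_STROKE * 2  # flattened (x, y) coordinates


class StrokePlacementMLP(nn.Module):
    """Refines initial stroke control points (fidelity axis)."""
    def __init__(self, dim=DIM, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, points):            # points: (B, DIM)
        return points + self.net(points)  # predict an offset to the initial strokes


class StrokeRemovalMLP(nn.Module):
    """Predicts a per-stroke keep probability used to thin the sketch (simplicity axis)."""
    def __init__(self, dim=DIM, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, NUM_STROKES),
        )

    def forward(self, points):
        return torch.sigmoid(self.net(points))  # (B, NUM_STROKES), values in [0, 1]


if __name__ == "__main__":
    placement = StrokePlacementMLP()
    removal = StrokeRemovalMLP()
    init_points = torch.rand(1, DIM)

    refined = placement(init_points)  # where strokes should go
    keep_prob = removal(refined)      # which strokes can be dropped
    # In the full method, these outputs would be rasterized to a sketch and
    # optimized against the paper's fidelity and simplicity objectives.
    print(refined.shape, keep_prob.shape)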

Related Material

[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Vinker_2023_ICCV,
    author    = {Vinker, Yael and Alaluf, Yuval and Cohen-Or, Daniel and Shamir, Ariel},
    title     = {CLIPascene: Scene Sketching with Different Types and Levels of Abstraction},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {4146-4156}
}