Bring Clipart to Life

Nanxuan Zhao, Shengqi Dang, Hexun Lin, Yang Shi, Nan Cao; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 23341-23350

Abstract


The development of face editing has been boosted since the birth of StyleGAN. While previous works have explored different interaction methods, such as sketching and exemplar photos, they have been limited in terms of expressiveness and generality. In this paper, we propose a new interaction method that guides the editing with abstract clipart composed of a set of simple semantic parts, allowing users to control face photos with simple clicks. However, this is a challenging task given the large domain gap between colorful face photos and abstract clipart, and the limited available data. To solve this problem, we introduce a framework called ClipFaceShop built on top of StyleGAN. The key idea is to take advantage of the rich and disentangled visual features encoded in the W+ latent code, and to create a new lightweight selective feature adaptor that predicts a modifiable path toward the target output photo. Since no paired labeled data exists for training, we design a set of losses to provide supervision signals for learning the modifiable path. Experimental results show that ClipFaceShop generates realistic and faithful face photos that share the same facial attributes as the reference clipart. We demonstrate that ClipFaceShop supports clipart in diverse styles, even in the form of a free-hand sketch.
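
To make the key idea concrete, below is a minimal, hypothetical sketch (not the authors' released code) of editing in StyleGAN's W+ space with a lightweight adaptor: the adaptor takes a W+ latent code and a clipart-derived condition vector, predicts a gated per-layer offset (a "modifiable path"), and adds it to the code before decoding with a frozen generator. All module names, dimensions, and the clipart encoder are assumptions for illustration; the actual ClipFaceShop architecture and losses are defined in the paper.

```python
# Minimal sketch (assumptions, not the authors' code): predicting an additive
# edit in StyleGAN's W+ space from a clipart-derived condition vector,
# while the generator itself stays frozen.
import torch
import torch.nn as nn

NUM_LAYERS, W_DIM, COND_DIM = 18, 512, 128  # typical StyleGAN2 W+ shape; COND_DIM is assumed


class SelectiveFeatureAdaptor(nn.Module):
    """Lightweight adaptor: maps (W+ code, clipart condition) to a per-layer
    offset plus a soft selection mask, so only the relevant features move."""

    def __init__(self, num_layers=NUM_LAYERS, w_dim=W_DIM, cond_dim=COND_DIM):
        super().__init__()
        in_dim = w_dim + cond_dim
        self.delta_head = nn.Sequential(
            nn.Linear(in_dim, w_dim), nn.LeakyReLU(0.2), nn.Linear(w_dim, w_dim)
        )
        self.mask_head = nn.Sequential(nn.Linear(in_dim, w_dim), nn.Sigmoid())

    def forward(self, w_plus, cond):
        # w_plus: (B, num_layers, w_dim); cond: (B, cond_dim)
        cond = cond.unsqueeze(1).expand(-1, w_plus.size(1), -1)
        x = torch.cat([w_plus, cond], dim=-1)
        delta = self.delta_head(x)   # proposed edit direction per layer
        mask = self.mask_head(x)     # soft gate selecting which features to modify
        return w_plus + mask * delta  # edited W+ code for the frozen generator


if __name__ == "__main__":
    adaptor = SelectiveFeatureAdaptor()
    w = torch.randn(2, NUM_LAYERS, W_DIM)  # W+ codes, e.g. from a GAN inversion encoder
    c = torch.randn(2, COND_DIM)           # clipart part embedding (hypothetical encoder)
    w_edit = adaptor(w, c)
    print(w_edit.shape)  # (2, 18, 512); decode with a frozen StyleGAN, e.g. img = G.synthesis(w_edit)
```

In this sketch, the generator is never fine-tuned; only the small adaptor would be trained, which is consistent with the paper's description of a lightweight module operating on W+ latent codes.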

Related Material


[bibtex]
@InProceedings{Zhao_2023_ICCV,
    author    = {Zhao, Nanxuan and Dang, Shengqi and Lin, Hexun and Shi, Yang and Cao, Nan},
    title     = {Bring Clipart to Life},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {23341-23350}
}