Few-Shot Font Style Transfer Between Different Languages

Chenhao Li, Yuta Taniguchi, Min Lu, Shin'ichi Konomi; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 433-442

Abstract


In this paper, we propose a novel model, FTransGAN, that can transfer font styles between different languages by observing only a few samples. The automatic generation of a new font library is a challenging task and has attracted the interest of many researchers. Most previous works addressed this problem by transferring the style of a given subset of characters to the content of unseen ones. Nevertheless, they focused only on font style transfer within the same language. In many tasks, we need to learn the font information from one language and then apply it to other languages, which is difficult for existing methods. To solve this problem, we design our network with a multi-level attention structure that captures both the local and global features of the style images. To verify the generative ability of our model, we construct an experimental font dataset of 847 fonts, each containing English and Chinese characters in the same style. Experimental results show that, compared with state-of-the-art models, our model generates 80.3% of all user-preferred images.
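To make the multi-level attention idea above concrete, the following is a minimal, hypothetical PyTorch sketch of a style encoder that applies spatial attention at a shallow (local) and a deep (global) level and fuses the two with a learned layer attention before averaging over the few style samples. All module names, layer sizes, and the fusion details are illustrative assumptions for exposition, not the authors' released implementation.

# Hypothetical sketch of a multi-level attention style encoder:
# local (shallow, small receptive field) and global (deep, large
# receptive field) features of the few style images are each pooled
# with spatial attention, then fused with a learned layer attention.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Collapses a feature map into one vector via learned spatial weights."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # one score per location

    def forward(self, feat):                      # feat: (B, C, H, W)
        weights = self.score(feat)                # (B, 1, H, W)
        weights = weights.flatten(2).softmax(-1)  # normalize over H*W
        feat = feat.flatten(2)                    # (B, C, H*W)
        return (feat * weights).sum(-1)           # (B, C)


class MultiLevelStyleEncoder(nn.Module):
    """Extracts local and global style vectors and fuses them."""
    def __init__(self, channels=64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.local_block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.global_block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU())
        self.local_attn = SpatialAttention(channels)
        self.global_attn = SpatialAttention(channels)
        # Layer attention: how much the local vs. global vector contributes.
        self.layer_attn = nn.Linear(2 * channels, 2)

    def forward(self, style_imgs):                # (B, K, 1, H, W): K style samples
        b, k, c, h, w = style_imgs.shape
        x = self.stem(style_imgs.view(b * k, c, h, w))
        local = self.local_block(x)               # shallow features
        glob = self.global_block(local)           # deeper, larger receptive field
        v_local = self.local_attn(local)          # (B*K, C)
        v_global = self.global_attn(glob)         # (B*K, C)
        alphas = self.layer_attn(torch.cat([v_local, v_global], -1)).softmax(-1)
        style = alphas[:, :1] * v_local + alphas[:, 1:] * v_global
        # Average the style codes of the K reference glyphs.
        return style.view(b, k, -1).mean(dim=1)   # (B, C)


if __name__ == "__main__":
    enc = MultiLevelStyleEncoder()
    refs = torch.randn(2, 6, 1, 64, 64)           # 6 style samples per batch item
    print(enc(refs).shape)                        # torch.Size([2, 64])

In this sketch the resulting style code would condition a generator that takes a content glyph from the target language; the choice of simple averaging over the K reference samples is an assumption made only to keep the example self-contained.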

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Li_2021_WACV,
    author    = {Li, Chenhao and Taniguchi, Yuta and Lu, Min and Konomi, Shin'ichi},
    title     = {Few-Shot Font Style Transfer Between Different Languages},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2021},
    pages     = {433-442}
}