Multi-style Generative Network for Real-time Transfer

Hang Zhang, Kristin Dana; Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018

Abstract


Despite the rapid progress in style transfer, existing approaches that use a feed-forward generative network for multi-style or arbitrary-style transfer usually compromise image quality and model flexibility. We find it is fundamentally difficult to achieve comprehensive style modeling using a 1-dimensional style embedding. Motivated by this, we introduce the CoMatch Layer, which learns to match the second-order feature statistics with the target styles. With the CoMatch Layer, we build a Multi-style Generative Network (MSG-Net), which achieves real-time performance. In addition, we employ a specific strategy of upsampled convolution that avoids the checkerboard artifacts caused by fractionally-strided convolution. Our method achieves superior image quality compared to state-of-the-art approaches. The proposed MSG-Net, as a general approach for real-time style transfer, is compatible with most existing techniques, including content-style interpolation, color preservation, spatial control and brush stroke size control. MSG-Net is the first to achieve real-time brush-size control in a purely feed-forward manner for style transfer. Our implementations and pre-trained models for the Torch, PyTorch and MXNet frameworks will be publicly available.
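
The "second-order feature statistics" the abstract refers to are channel-wise Gram matrices of convolutional feature maps. Below is a minimal PyTorch sketch of a CoMatch-style layer in that spirit: a learnable channel transform, initialized to identity, maps content features toward a target style Gram matrix. The class and method names (CoMatchSketch, set_target) are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

def gram_matrix(feat):
    # feat: (B, C, H, W) -> channel-wise second-order statistic, shape (B, C, C)
    b, c, h, w = feat.size()
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

class CoMatchSketch(nn.Module):
    # Hypothetical sketch: learns a channel transform W that pushes
    # content features toward the target style's Gram matrix.
    def __init__(self, channels):
        super().__init__()
        # W initialized to identity; shape (1, C, C) for batched matmul
        self.weight = nn.Parameter(torch.eye(channels).unsqueeze(0))
        self.target_gram = None

    def set_target(self, style_feat):
        # style_feat: features of one style image, shape (1, C, H, W);
        # swapping this target re-styles the network without retraining
        self.target_gram = gram_matrix(style_feat)

    def forward(self, content_feat):
        b, c, h, w = content_feat.size()
        f = content_feat.view(b, c, h * w)
        wg = torch.bmm(self.weight, self.target_gram)  # (1, C, C)
        out = torch.bmm(wg.expand(b, c, c), f)         # (B, C, H*W)
        return out.view(b, c, h, w)
```

For example, `CoMatchSketch(512)` with `set_target(torch.randn(1, 512, 32, 32))` transforms a `(4, 512, 32, 32)` content batch in a single forward pass; storing the style as a Gram matrix rather than a 1-D embedding is what preserves the richer second-order description of the style.

The "upsampled convolution" strategy can likewise be sketched: replace a fractionally-strided (transposed) convolution with nearest-neighbor upsampling followed by an ordinary convolution, a standard remedy for checkerboard artifacts. Layer names and sizes here are assumptions for illustration.

```python
import torch.nn as nn

class UpsampleConv(nn.Module):
    # Nearest-neighbor upsampling followed by a plain convolution,
    # used in place of a fractionally-strided convolution.
    def __init__(self, in_ch, out_ch, kernel_size=3, scale=2):
        super().__init__()
        self.upsample = nn.Upsample(scale_factor=scale, mode='nearest')
        self.pad = nn.ReflectionPad2d(kernel_size // 2)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size)

    def forward(self, x):
        # Upsampling first decouples output resolution from the
        # convolution stride, so kernel footprints overlap evenly
        # and no checkerboard pattern is introduced.
        return self.conv(self.pad(self.upsample(x)))
```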
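A follow-up note on the design choice: a transposed convolution with stride 2 and kernel 3 covers output pixels unevenly, which is the usual source of checkerboard artifacts; upsampling before a stride-1 convolution sidesteps this entirely at similar cost.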

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Zhang_2018_ECCV_Workshops,
author = {Zhang, Hang and Dana, Kristin},
title = {Multi-style Generative Network for Real-time Transfer},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV) Workshops},
month = {September},
year = {2018}
}