Decoder Network Over Lightweight Reconstructed Feature for Fast Semantic Style Transfer

Ming Lu, Hao Zhao, Anbang Yao, Feng Xu, Yurong Chen, Li Zhang; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2469-2477

Abstract


Recently, the style transfer community has been trying to incorporate semantic information into traditional systems. This practice achieves better perceptual results by transferring the style between semantically corresponding regions. Yet, few efforts have been invested in addressing the computational bottleneck of back-propagation. In this paper, we propose a new framework for fast semantic style transfer. Our method decomposes the semantic style transfer problem into a feature reconstruction part and a feature decoder part. The reconstruction part solves the optimization problem of content loss and style loss directly in feature space via a specially reconstructed feature. This significantly reduces the computation of propagating the loss through the whole network. The decoder part then transforms the reconstructed feature into the stylized image. Through a careful bridging of the two modules, the proposed approach not only achieves results competitive with backward-optimization methods but is also about two orders of magnitude faster.
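The core idea of the reconstruction part (optimizing content and style losses in feature space rather than pixel space) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: it assumes a quadratic content term and the standard Gram-matrix style term, and all names (`gram`, `reconstruct_feature`, `lam`, `lr`) are hypothetical.

```python
import numpy as np

def gram(F):
    # F: (C, HW) feature map, channels x flattened spatial positions.
    # Gram matrix captures channel correlations, the usual style statistic.
    return F @ F.T / F.shape[1]

def reconstruct_feature(Fc, Fs, lam=1e-2, lr=0.1, steps=200):
    """Gradient descent on a feature F minimizing
        ||F - Fc||^2 + lam * ||gram(F) - gram(Fs)||^2,
    i.e. content and style losses solved directly in feature space,
    so no loss needs to be back-propagated through the whole network."""
    F = Fc.copy()
    n = F.shape[1]
    for _ in range(steps):
        G_diff = gram(F) - gram(Fs)
        # d/dF ||F - Fc||^2            = 2 (F - Fc)
        # d/dF ||gram(F) - gram(Fs)||^2 = (4 / HW) * G_diff @ F
        grad = 2.0 * (F - Fc) + lam * (4.0 / n) * (G_diff @ F)
        F -= lr * grad
    return F
```

In the paper's framework, the resulting feature would then be passed to the decoder network, which maps it back to a stylized image in a single forward pass.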

Related Material


[pdf]
[bibtex]
@InProceedings{Lu_2017_ICCV,
author = {Lu, Ming and Zhao, Hao and Yao, Anbang and Xu, Feng and Chen, Yurong and Zhang, Li},
title = {Decoder Network Over Lightweight Reconstructed Feature for Fast Semantic Style Transfer},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}