Localin Reshuffle Net: Toward Naturally and Efficiently Facial Image Blending

Chengyao Zheng, Siyu Xia, Joseph Robinson, Changsheng Lu, Wayne Wu, Chen Qian, Ming Shao; Proceedings of the Asian Conference on Computer Vision (ACCV), 2020


Blending facial images is an effective way to fuse attributes so that the synthesis remains robust to finer details (e.g., the periocular region, nostrils, and hairlines). Specifically, facial blending aims to transfer the style of a source image to a target while minimizing violations of natural appearance. Despite its many practical applications, facial image blending remains mostly unexplored for two reasons: 1) the lack of quality paired data for supervision, and 2) the sensitivity of facial synthesizers (i.e., the models) to small variations in lighting, texture, resolution, and age. We address this bottleneck by first building the Facial Pairs to Blend (FPB) dataset, generated through our facial attribute optimization algorithm. We then propose an effective normalization scheme that captures local statistical information during blending: Local Instance Normalization (LIN). Lastly, a novel local-reshuffle layer is designed to map local patches in the feature space; it can be learned in an end-to-end fashion with a dedicated loss and is essential to the proposed Localin Reshuffle Network (LRNet). Extensive experiments, with both quantitative and qualitative results, demonstrate that our approach outperforms existing methods.
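The abstract describes a normalization scheme driven by local, rather than global, feature statistics. The paper's exact formulation is not given here, but the general idea can be sketched as normalizing each local patch of a feature map by that patch's own mean and standard deviation (the function name, patch size, and non-overlapping-patch choice below are illustrative assumptions, not the paper's specification):

```python
import numpy as np

def local_instance_norm(x, patch=4, eps=1e-5):
    """Illustrative sketch of patch-wise normalization: each
    non-overlapping patch of a (C, H, W) feature map is normalized
    by its own mean/std, so statistics are matched locally rather
    than once per whole feature map. Hypothetical, not the paper's
    exact LIN layer."""
    c, h, w = x.shape
    assert h % patch == 0 and w % patch == 0
    # Reshape to (C, H/p, p, W/p, p) so each p-by-p block is a patch.
    blocks = x.reshape(c, h // patch, patch, w // patch, patch)
    mean = blocks.mean(axis=(2, 4), keepdims=True)
    std = blocks.std(axis=(2, 4), keepdims=True)
    out = (blocks - mean) / (std + eps)
    return out.reshape(c, h, w)
```

After this transform, every patch has approximately zero mean and unit variance, which is one plausible way to match local statistics between a source and target during blending.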
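The local-reshuffle layer is said to map local patches in feature space. Without the paper's details, a common instance of this general idea (in the spirit of style-swap patch matching) is to replace each target patch with its most similar source patch under cosine similarity; the sketch below assumes non-overlapping patches and is illustrative only:

```python
import numpy as np

def reshuffle_patches(src, tgt, patch=2, eps=1e-8):
    """Illustrative patch reshuffle over (C, H, W) feature maps:
    each non-overlapping target patch is replaced by the source
    patch with the highest cosine similarity. A sketch of the
    general technique, not the paper's learned layer."""
    c, h, w = tgt.shape

    def extract(f):
        # Flatten every non-overlapping patch into a row vector.
        rows = []
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                rows.append(f[:, i:i + patch, j:j + patch].ravel())
        return np.stack(rows)

    sp, tp = extract(src), extract(tgt)
    sp_n = sp / (np.linalg.norm(sp, axis=1, keepdims=True) + eps)
    tp_n = tp / (np.linalg.norm(tp, axis=1, keepdims=True) + eps)
    # Best-matching source patch for each target patch.
    idx = (tp_n @ sp_n.T).argmax(axis=1)

    out = np.zeros_like(tgt)
    k = 0
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            out[:, i:i + patch, j:j + patch] = sp[idx[k]].reshape(c, patch, patch)
            k += 1
    return out
```

In the paper this mapping is learned end-to-end with a dedicated loss; the nearest-neighbor matching above is only a non-learned stand-in for the operation's effect.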

Related Material

@InProceedings{Zheng_2020_ACCV,
  author    = {Zheng, Chengyao and Xia, Siyu and Robinson, Joseph and Lu, Changsheng and Wu, Wayne and Qian, Chen and Shao, Ming},
  title     = {Localin Reshuffle Net: Toward Naturally and Efficiently Facial Image Blending},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  month     = {November},
  year      = {2020}
}