RelGAN: Multi-Domain Image-to-Image Translation via Relative Attributes

Po-Wei Wu, Yu-Jing Lin, Che-Han Chang, Edward Y. Chang, Shih-Wei Liao; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 5914-5922


Multi-domain image-to-image translation has gained increasing attention recently. Previous methods take an image and some target attributes as inputs and generate an output image with the desired attributes. However, such methods have two limitations. First, they assume binary-valued attributes and thus cannot yield satisfactory results for fine-grained control. Second, they require specifying the entire set of target attributes, even if most of the attributes would remain unchanged. To address these limitations, we propose RelGAN, a new method for multi-domain image-to-image translation. The key idea is to use relative attributes, which describe the desired changes to selected attributes. Our method is capable of modifying images by changing particular attributes of interest in a continuous manner while preserving the other attributes. Experimental results demonstrate both the quantitative and qualitative effectiveness of our method on the tasks of facial attribute transfer and interpolation.
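The contrast between absolute target attributes and relative attributes can be illustrated with a minimal sketch. The attribute names, vector layout, and the interpolation scalar below are illustrative assumptions, not taken from the paper's implementation:

```python
import numpy as np

# Assumed attribute order for illustration: [black_hair, blond_hair, smiling]
original = np.array([1.0, 0.0, 0.0])   # source image: black hair, not smiling
target   = np.array([0.0, 1.0, 0.0])   # desired: blond hair, not smiling

# Absolute formulation: the generator receives the full target vector,
# even for attributes that stay unchanged.
# Relative formulation (the RelGAN idea): only the difference is supplied.
relative = target - original            # [-1., 1., 0.]

# Zero entries mean "leave this attribute alone"; scaling the relative
# vector by alpha in [0, 1] gives continuous interpolation between
# the original and the fully edited image.
alpha = 0.5
partial_change = alpha * relative       # [-0.5, 0.5, 0.]
print(relative, partial_change)
```

Because unchanged attributes map to zero entries, the user never needs to know or restate the source image's full attribute set, which is what enables editing only the attributes of interest.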

Related Material

@InProceedings{Wu_2019_ICCV,
  author    = {Wu, Po-Wei and Lin, Yu-Jing and Chang, Che-Han and Chang, Edward Y. and Liao, Shih-Wei},
  title     = {RelGAN: Multi-Domain Image-to-Image Translation via Relative Attributes},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2019}
}