The Contextual Loss for Image Transformation with Non-Aligned Data
Roey Mechrez, Itamar Talmi, Lihi Zelnik-Manor; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 768-783
Abstract
Feed-forward CNNs trained for image transformation problems rely on loss functions that measure the similarity between the generated image and a target image. Most of the common loss functions assume that these images are spatially aligned and compare pixels at corresponding locations. However, for many tasks, aligned training pairs of images will not be available. We present an alternative loss function that does not require alignment, thus providing an effective and simple solution for a new space of problems. Our loss is based on both context and semantics -- it compares regions with similar semantic meaning, while considering the context of the entire image. Hence, for example, when transferring the style of one face to another, it will translate eyes-to-eyes and mouth-to-mouth.
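The abstract describes a loss that matches feature regions by similarity rather than by spatial position. A minimal NumPy sketch of that idea is below, following the contextual-loss formulation from the paper (pairwise cosine distances between feature sets, per-row normalization by the closest match, a softmax-like weighting, then the mean of the best matches); the function name, the bandwidth `h`, and the default `eps` are illustrative choices, not the authors' reference implementation.

```python
import numpy as np

def contextual_loss(X, Y, h=0.5, eps=1e-5):
    """Sketch of a contextual loss between two sets of feature vectors.

    X, Y: (N, C) arrays of N feature vectors (e.g. deep-network
    activations), one per spatial location. No spatial alignment
    between the rows of X and Y is assumed.
    """
    # Cosine distance between every pair of (generated, target) features.
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + eps)
    Yn = Y / (np.linalg.norm(Y, axis=1, keepdims=True) + eps)
    d = 1.0 - Xn @ Yn.T                          # (N, N) pairwise distances

    # Normalize each row by its smallest distance, so similarity is
    # judged relative to the best available match (context).
    d_tilde = d / (d.min(axis=1, keepdims=True) + eps)

    # Convert distances to similarities and row-normalize (softmax-like).
    w = np.exp((1.0 - d_tilde) / h)
    cx = w / w.sum(axis=1, keepdims=True)

    # For each target feature keep its best match, average over targets,
    # and take the negative log as the loss.
    return -np.log(cx.max(axis=0).mean() + eps)
```

With identical feature sets the loss is near zero, and it grows as the two sets diverge, without ever comparing features at corresponding pixel locations.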
Related Material
@InProceedings{Mechrez_2018_ECCV,
author = {Mechrez, Roey and Talmi, Itamar and Zelnik-Manor, Lihi},
title = {The Contextual Loss for Image Transformation with Non-Aligned Data},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}