Image Shape Manipulation From a Single Augmented Training Sample

Yael Vinker, Eliahu Horwitz, Nir Zabari, Yedid Hoshen; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 13769-13778

Abstract


In this paper, we present DeepSIM, a generative model for conditional image manipulation based on a single image. We find that extensive augmentation is key to enabling single-image training, and we use thin-plate-spline (TPS) warping as an effective augmentation. Our network learns to map from a primitive representation of the image to the image itself. The choice of primitive representation affects the ease and expressiveness of the manipulations and can be automatic (e.g., edges), manual (e.g., segmentation), or a hybrid such as edges on top of segmentations. At manipulation time, our generator allows complex image changes to be made by modifying the primitive input representation and mapping it through the network. Our method is shown to achieve remarkable performance on image manipulation tasks.
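To make the augmentation idea concrete, below is a minimal sketch (not the authors' code) of how a single (primitive, image) training pair could be jointly warped each step before being fed to the generator. It approximates the TPS augmentation described above with a smooth random warp built from a coarse displacement grid; the function and variable names are illustrative assumptions, not identifiers from the paper's repository.

```python
# Sketch: joint smooth random warp of an (edge-map, image) pair, approximating
# the paper's TPS augmentation. Names here are hypothetical, not from DeepSIM.
import torch
import torch.nn.functional as F


def random_smooth_warp(primitive, image, grid_size=4, max_shift=0.1):
    """Warp `primitive` and `image` (1xCxHxW tensors) with the same randomly
    perturbed control grid so the pair stays spatially aligned."""
    _, _, h, w = image.shape

    # Identity sampling grid in normalized [-1, 1] coordinates.
    theta = torch.tensor([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])
    identity = F.affine_grid(theta, size=(1, 1, h, w), align_corners=False)

    # Coarse random displacements, upsampled into a dense smooth flow field.
    coarse = (torch.rand(1, 2, grid_size, grid_size) * 2 - 1) * max_shift
    dense = F.interpolate(coarse, size=(h, w), mode='bicubic',
                          align_corners=False)
    flow = dense.permute(0, 2, 3, 1)  # 1 x H x W x 2

    grid = identity + flow
    warp = lambda x: F.grid_sample(x, grid, mode='bilinear',
                                   padding_mode='border', align_corners=False)
    return warp(primitive), warp(image)


# Usage: each training step warps the single training pair anew, and the
# generator is trained to map the warped primitive to the warped image.
edges = torch.rand(1, 1, 256, 256)   # placeholder primitive (e.g. edge map)
photo = torch.rand(1, 3, 256, 256)   # placeholder training image
aug_edges, aug_photo = random_smooth_warp(edges, photo)
```

A true thin-plate-spline interpolates displacements from scattered control points rather than a regular upsampled grid, but the training-time usage, warping the primitive and image identically so their correspondence is preserved, is the same.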

Related Material


BibTeX:

@InProceedings{Vinker_2021_ICCV,
    author    = {Vinker, Yael and Horwitz, Eliahu and Zabari, Nir and Hoshen, Yedid},
    title     = {Image Shape Manipulation From a Single Augmented Training Sample},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {13769-13778}
}