Deep Image Blending

Lingzhi Zhang, Tarmily Wen, Jianbo Shi; The IEEE Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 231-240

Abstract
Image composition is an important operation for creating visual content. Among image composition tasks, image blending aims to seamlessly blend an object from a source image onto a target image with only light mask adjustment. A popular approach is Poisson image blending, which enforces gradient-domain smoothness in the composite image. However, this approach only considers the boundary pixels of the target image and therefore cannot adapt to the target image's texture. In addition, the colors of the target image often seep too far into the source object, causing a significant loss of its content. We propose a Poisson blending loss that achieves the same purpose as Poisson image blending. We jointly optimize the proposed Poisson blending loss together with style and content losses computed from a deep network, and reconstruct the blending region by iteratively updating its pixels with the L-BFGS solver. The resulting blended image not only has a smooth gradient domain across the blending boundary but also contains texture consistent with the target. User studies show that our method outperforms strong baselines as well as state-of-the-art approaches when placing objects onto both paintings and real-world images.
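The core idea of the Poisson blending loss — reconstruct the blending region so that the composite's gradients match the source's, with the target supplying the boundary — can be sketched in a few lines. This is a minimal single-channel toy with forward-difference gradients and SciPy's L-BFGS-B solver, not the authors' implementation; it omits the deep style and content losses, and the mask/composite formulation here is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def grad_x(img):
    # Forward difference along columns.
    return img[:, 1:] - img[:, :-1]

def grad_y(img):
    # Forward difference along rows.
    return img[1:, :] - img[:-1, :]

def poisson_loss(flat_blend, source, target, mask):
    blend = flat_blend.reshape(source.shape)
    # Composite: optimized pixels inside the mask, target pixels outside,
    # so the target fixes the boundary conditions.
    comp = mask * blend + (1.0 - mask) * target
    # Penalize deviation of the composite's gradients from the source's.
    return (np.sum((grad_x(comp) - grad_x(source)) ** 2)
            + np.sum((grad_y(comp) - grad_y(source)) ** 2))

# Toy data: random 16x16 single-channel images with a square blending mask.
rng = np.random.default_rng(0)
source = rng.random((16, 16))
target = rng.random((16, 16))
mask = np.zeros((16, 16))
mask[4:12, 4:12] = 1.0

# Initialize the blending region from the source and refine with L-BFGS-B.
x0 = source.ravel().copy()
res = minimize(poisson_loss, x0, args=(source, target, mask),
               method="L-BFGS-B", options={"maxiter": 100})
blended = res.x.reshape(source.shape)
```

The full method adds style and content losses from a pretrained network to the same objective, so the optimizer smooths the boundary and injects target-consistent texture simultaneously.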

Related Material
[pdf]
[bibtex]
@InProceedings{Zhang_2020_WACV,
author = {Zhang, Lingzhi and Wen, Tarmily and Shi, Jianbo},
title = {Deep Image Blending},
booktitle = {The IEEE Winter Conference on Applications of Computer Vision (WACV)},
month = {March},
year = {2020}
}