Weakly- and Self-Supervised Learning for Content-Aware Deep Image Retargeting

Donghyeon Cho, Jinsun Park, Tae-Hyun Oh, Yu-Wing Tai, In So Kweon; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 4558-4567

Abstract


This paper proposes a weakly- and self-supervised deep convolutional neural network (WSSDCNN) for content-aware image retargeting. Our network takes a source image and a target aspect ratio, and then directly outputs a retargeted image. Retargeting is performed through a shift map, which is a pixel-wise mapping from the source to the target grid. Our method implicitly learns an attention map, which leads to a content-aware shift map for image retargeting. As a result, discriminative parts in an image are preserved, while background regions are adjusted seamlessly. In the training phase, pairs of an image and its image-level annotation are used to compute content and structure losses. We demonstrate the effectiveness of our proposed method on the retargeting task with insightful analyses.
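To make the shift-map formulation concrete, the sketch below warps a source image into a narrower target grid using a per-pixel horizontal shift map. This is a hedged simplification: the paper's network *predicts* a content-aware shift map, whereas here the map is supplied as an input, and nearest-neighbor sampling stands in for differentiable bilinear sampling. The function name `apply_shift_map` and the exact indexing convention are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def apply_shift_map(src, shift_map):
    """Warp a source image onto a target grid via a horizontal shift map.

    src:       (H, W, C) source image.
    shift_map: (H_t, W_t) horizontal shifts; target pixel (y, x) samples
               source column x + shift_map[y, x] (nearest-neighbor here;
               the paper uses differentiable sampling so the network can
               learn the map end-to-end).
    """
    H, W, C = src.shape
    H_t, W_t = shift_map.shape
    out = np.zeros((H_t, W_t, C), dtype=src.dtype)
    for y in range(H_t):
        # Source column for each target pixel, clamped to the image.
        cols = np.clip(
            np.round(np.arange(W_t) + shift_map[y]).astype(int), 0, W - 1
        )
        out[y] = src[y, cols]
    return out

# Toy example: shrink a 6-wide image to width 4 with a uniform shift of 1,
# so every target pixel samples one column to its right in the source.
img = np.arange(4 * 6 * 1).reshape(4, 6, 1)
shift = np.ones((4, 4))
ret = apply_shift_map(img, shift)
```

A content-aware map would instead assign near-zero shift gradients inside salient regions (so they keep their width) and concentrate the compression in the background, which is what the learned attention map encourages.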

Related Material


@InProceedings{Cho_2017_ICCV,
author = {Cho, Donghyeon and Park, Jinsun and Oh, Tae-Hyun and Tai, Yu-Wing and So Kweon, In},
title = {Weakly- and Self-Supervised Learning for Content-Aware Deep Image Retargeting},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}