Deep fusion network for splicing forgery localization

Bo Liu, Chi-Man Pun; Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018, pp. 0-0

Abstract


Digital splicing is a common type of image forgery in which some regions of an image are replaced with content from other images. Locating the altered regions in a tampered picture is challenging because the difference between the altered and the original regions is unknown, so a large hypothesis space must be searched for a convincing result. In this paper, we propose a novel deep fusion network that locates the tampered area by tracing its border. A group of deep convolutional neural networks, called Base-Nets, is first trained so that each responds to a specific type of splicing forgery. Then, selected layers of the Base-Nets are combined into a deep fusion neural network (Fusion-Net). After fine-tuning on a very small number of pictures, Fusion-Net is able to discern whether an image block is synthesized from different origins. Experiments on benchmark datasets show that our method is effective in various situations and outperforms state-of-the-art methods.
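
The sketch below is not the authors' code; it is a minimal PyTorch illustration, under stated assumptions, of the general idea in the abstract: several Base-Nets are each trained for one forgery type, and selected layers from them are combined into a Fusion-Net whose small head is fine-tuned on a few images to classify whether an image block is spliced. All class names, layer choices, and sizes (BaseNet, FusionNet, the 64x64 block size) are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only: Base-Nets per forgery type, then a Fusion-Net
# built from their (frozen) feature layers with a small fine-tuned head.
import torch
import torch.nn as nn


class BaseNet(nn.Module):
    """One CNN assumed to be trained on a single type of splicing artifact."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # spliced vs. pristine block

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))


class FusionNet(nn.Module):
    """Combines feature layers taken from several pre-trained Base-Nets."""

    def __init__(self, base_nets):
        super().__init__()
        # Reuse and freeze the convolutional layers of each Base-Net.
        self.branches = nn.ModuleList([b.features for b in base_nets])
        for p in self.branches.parameters():
            p.requires_grad = False
        # Small head that would be fine-tuned on a few images.
        self.head = nn.Linear(32 * len(base_nets), 2)

    def forward(self, x):
        feats = [branch(x).flatten(1) for branch in self.branches]
        return self.head(torch.cat(feats, dim=1))


if __name__ == "__main__":
    bases = [BaseNet() for _ in range(3)]   # one Base-Net per forgery type
    fusion = FusionNet(bases)
    block = torch.randn(8, 3, 64, 64)       # a batch of image blocks
    print(fusion(block).shape)              # torch.Size([8, 2])
```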

Related Material


[pdf]
[bibtex]
@InProceedings{Liu_2018_ECCV_Workshops,
author = {Liu, Bo and Pun, Chi-Man},
title = {Deep fusion network for splicing forgery localization},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV) Workshops},
month = {September},
year = {2018}
}