Extracting Camera-Based Fingerprints for Video Forensics

Davide Cozzolino, Giovanni Poggi, Luisa Verdoliva; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 130-137

Abstract


Video source attribution is an important operation in forensic applications. Identifying which specific device or camera model took a video can help in authorship verification, but can also be a valuable source of information for detecting a possible manipulation. The key observation is that any physical device leaves peculiar traces in the acquired content, a sort of fingerprint that can be exploited to establish data provenance. Moreover, the absence or modification of such traces may reveal a possible manipulation. In this paper, inspired by recent work on images, we train a neural network that enhances the model-related traces hidden in a video, extracting a sort of camera fingerprint, called a video noiseprint. The network is trained on pristine videos with a Siamese strategy, minimizing distances between same-model patches and maximizing distances between unrelated patches. Experiments show that methods based on video noiseprints perform well in major forensic tasks, such as camera model identification and video forgery localization, with no need for prior knowledge of the specific manipulation or any form of fine-tuning.
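
The following is a minimal PyTorch sketch of the Siamese training idea described in the abstract: a fully-convolutional extractor whose outputs are pulled together for patches of the same camera model and pushed apart for unrelated patches. The network depth, the contrastive loss form, and all hyper-parameters below are illustrative assumptions, not the authors' exact architecture or objective.

# Illustrative Siamese-style training objective for a noiseprint-like extractor.
# Assumptions: a small DnCNN-like CNN and a standard contrastive loss; these
# stand in for the paper's actual network and distance-based loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseprintNet(nn.Module):
    """Small fully-convolutional residual extractor (illustrative)."""
    def __init__(self, channels=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):          # x: (N, 1, H, W) luminance patches
        return self.body(x)        # (N, 1, H, W) noiseprint estimates

def siamese_contrastive_loss(fp, labels, margin=1.0):
    """Pull same-model noiseprints together, push unrelated ones apart.

    fp:     (N, 1, H, W) extracted noiseprints
    labels: (N,) integer camera-model ids of the patches
    """
    f = fp.flatten(1)                                   # (N, D)
    d = torch.cdist(f, f)                               # pairwise L2 distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # (N, N) same-model mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=fp.device)
    pos = d[same & ~eye]                                # same-model pairs
    neg = d[~same]                                      # unrelated pairs
    return (pos ** 2).mean() + (F.relu(margin - neg) ** 2).mean()

# Toy usage: a batch of video patches from two hypothetical camera models.
if __name__ == "__main__":
    net = NoiseprintNet()
    patches = torch.randn(8, 1, 48, 48)                 # fake luminance patches
    models = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])     # camera-model ids
    loss = siamese_contrastive_loss(net(patches), models)
    loss.backward()
    print(f"contrastive loss: {loss.item():.4f}")

At test time, such an extractor would be applied to full frames, and the resulting noiseprint compared against reference model fingerprints (for attribution) or analyzed for local inconsistencies (for forgery localization), following the general pipeline outlined in the abstract.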

Related Material


[pdf]
[bibtex]
@InProceedings{Cozzolino_2019_CVPR_Workshops,
author = {Cozzolino, Davide and Poggi, Giovanni and Verdoliva, Luisa},
title = {Extracting Camera-Based Fingerprints for Video Forensics},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}