How Do Deepfakes Move? Motion Magnification for Deepfake Source Detection

Ilke Demir, Umur Aybars Çiftçi; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 4780-4790

Abstract


With the proliferation of deep generative models, deepfakes are improving in quality and quantity every day. However, pristine videos contain subtle authenticity signals that current generative models do not replicate. We contrast the motion in deepfakes and authentic videos using motion magnification, toward building a generalized deepfake source detector. The sub-muscular motion in faces is interpreted differently by each generative model, and this difference is reflected in their generative residue. Our approach exploits the gap between real motion and the amplified generative artifacts, combining deep and traditional motion magnification, to detect whether a video is fake and, if so, which generator produced it. Evaluating our approach on two multi-source datasets, we obtain 97.77% and 94.03% accuracy for video source detection. Our approach performs at least 4.08% better than the prior deepfake source detector and other complex architectures. We also analyze the magnification amount, phase extraction window, backbone network, sample counts, and sample lengths. Finally, we report results across skin tones and genders to assess model bias.
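To make the idea concrete, below is a minimal sketch of the traditional (Eulerian) side of motion magnification: a temporal bandpass filter amplifies subtle per-pixel intensity variations, the kind of faint sub-muscular facial motion the paper analyzes. The function name, parameters, and pass-band here are illustrative assumptions, not the authors' implementation, which additionally combines deep magnification and a source classifier.

import numpy as np
from scipy.signal import butter, filtfilt

def magnify_motion(frames, alpha=20.0, low=0.4, high=3.0, fps=30.0):
    # frames: float array of shape (T, H, W), values in [0, 1].
    # alpha: magnification amount (one of the factors the paper ablates).
    # A second-order Butterworth bandpass isolates the subtle temporal
    # motion band (low..high Hz) independently at each pixel.
    b, a = butter(2, [low, high], btype="band", fs=fps)
    band = filtfilt(b, a, frames, axis=0)
    # Amplify the filtered signal and add it back onto the original frames.
    return np.clip(frames + alpha * band, 0.0, 1.0)

# Toy usage: a 3-second clip whose faint 1 Hz flicker becomes clearly visible.
T, H, W = 90, 32, 32
t = np.arange(T) / 30.0
clip = 0.5 + 0.005 * np.sin(2 * np.pi * t)[:, None, None] * np.ones((T, H, W))
magnified = magnify_motion(clip)

In the paper, phase-based and learned magnification stand in for this simple linear amplification, and the magnified residue is fed to a backbone network that classifies the source generator.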

Related Material


@InProceedings{Demir_2024_WACV,
    author    = {Demir, Ilke and \c{C}ift\c{c}i, Umur Aybars},
    title     = {How Do Deepfakes Move? Motion Magnification for Deepfake Source Detection},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {4780-4790}
}