WaveDIF: Wavelet sub-band based Deepfake Identification in Frequency Domain
Abstract
As deepfakes become increasingly realistic, their identification grows more demanding. Numerous deepfake detection techniques have been proposed recently, most of which operate in the spatio-temporal domain. While these methods have shown promise, many of them neglect telltale artifacts that exhibit distinct patterns in the frequency domain. This research proposes WaveDIF, a lightweight, strictly frequency-domain deepfake video detection algorithm based on wavelet sub-band energies. For feature extraction, each video undergoes a Discrete Fourier Transform to filter out high-frequency noisy details (quite evident in deepfakes). The filtered representations are then passed through a Haar filter and decomposed into their wavelet sub-bands: LL (Low-Low), LH (Low-High), HL (High-Low), and HH (High-High), after which the energy of each sub-band is computed. These energy values are used to learn a linear decision boundary (via regression analysis), which is then used for classification. The result is an interpretable, lightweight, deterministic technique for detecting synthesized videos that achieves accuracy comparable to the state of the art. Experimental results on popular deepfake video datasets show over 92% accuracy for in-dataset evaluation and 88% accuracy for cross-dataset evaluation.
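
The following is a minimal sketch of the pipeline described in the abstract, not the authors' implementation. It assumes per-frame grayscale processing, a circular low-pass FFT mask (the cutoff ratio keep_ratio is a guess), PyWavelets for the single-level Haar decomposition, averaging of sub-band energies over frames, and scikit-learn's LogisticRegression as a stand-in for the paper's regression-based linear decision boundary. All function names and parameters are illustrative.

import numpy as np
import pywt
from sklearn.linear_model import LogisticRegression

def lowpass_fft(frame, keep_ratio=0.75):
    # Assumed DFT-based filtering step: keep a centered circular region of the
    # spectrum and discard the rest (the cutoff ratio is an assumption).
    spec = np.fft.fftshift(np.fft.fft2(frame))
    h, w = frame.shape
    yy, xx = np.ogrid[:h, :w]
    radius = keep_ratio * min(h, w) / 2.0
    mask = (yy - h / 2.0) ** 2 + (xx - w / 2.0) ** 2 <= radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))

def subband_energies(frame):
    # Single-level Haar DWT; energy of a sub-band = sum of its squared coefficients.
    LL, (LH, HL, HH) = pywt.dwt2(frame, "haar")
    return np.array([np.sum(b ** 2) for b in (LL, LH, HL, HH)])

def video_features(gray_frames):
    # One 4-D feature vector per video: sub-band energies averaged over frames
    # (the aggregation scheme over frames is an assumption).
    feats = [subband_energies(lowpass_fft(f.astype(np.float64))) for f in gray_frames]
    return np.mean(feats, axis=0)

# X: stacked video_features() rows, y: 0 = real, 1 = deepfake.
# clf = LogisticRegression().fit(X, y)   # learns a linear decision boundary
# pred = clf.predict(X_test)

Because each video reduces to only four energy values, the feature vector stays compact and interpretable, which is why a simple linear classifier suffices in this sketch.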
Related Material
[pdf] [supp] [bibtex]
@InProceedings{Dutta_2025_CVPR,
  author    = {Dutta, Anurag and Das, Arnab Kumar and Naskar, Ruchira and Chakraborty, Rajat Subhra},
  title     = {WaveDIF: Wavelet sub-band based Deepfake Identification in Frequency Domain},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2025},
  pages     = {6311-6320}
}