- [pdf] [supp] [code]
Confidence-Calibrated Face Image Forgery Detection with Contrastive Representation Distillation
Face forgery detection has been increasingly investigated due to the rapid advances of various deepfake techniques. While most existing face forgery detection methods achieve excellent results on the test split of the same dataset or the same type of manipulation, they often fail to generalize to unseen datasets or unseen manipulations. Therefore, in this paper, we propose a novel contrastive distillation calibration (CDC) framework, which distills contrastive representations with confidence calibration to address this generalization issue. Unlike previous methods that treat the two forgery types, Face Swapping and Face Reenactment, equally, we devise a dual-teacher module in which the knowledge of each forgery type is learned separately. A contrastive representation learning strategy is further presented to enhance the representations of diverse forgery artifacts. To prevent the proposed model from being overconfident, we propose a novel Kullback-Leibler divergence loss with dynamic weights to moderate the dual-teacher outputs. In addition, we introduce label smoothing to calibrate the model confidence against the target outputs. Extensive experiments on three popular datasets show that our proposed method achieves state-of-the-art performance for cross-dataset face forgery detection.
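To make the calibration components of the abstract concrete, the following is a minimal sketch of a dynamically weighted KL distillation loss combined with label smoothing. The dynamic-weight heuristic here (weighting each teacher by its prediction confidence) and all function names are illustrative assumptions; the abstract does not specify the paper's exact formulation.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: redistribute eps of the probability mass uniformly."""
    k = len(one_hot)
    return [(1 - eps) * p + eps / k for p in one_hot]

def kl_div(p, q):
    """KL(p || q) between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def distillation_loss(student_logits, teacher_swap_logits,
                      teacher_reenact_logits, target, eps=0.1):
    """Smoothed-label cross-entropy plus dynamically weighted KL terms.

    The dynamic weights are a simple confidence heuristic (each teacher's
    maximum probability), assumed here for illustration only.
    """
    student = softmax(student_logits)
    t_swap = softmax(teacher_swap_logits)       # Face Swapping teacher
    t_reenact = softmax(teacher_reenact_logits)  # Face Reenactment teacher

    # Calibrated supervised term: cross-entropy against smoothed labels.
    smoothed = smooth_labels(target, eps)
    ce = -sum(t * math.log(s) for t, s in zip(smoothed, student))

    # Dynamic weights: trust each teacher in proportion to its confidence.
    w_swap, w_reenact = max(t_swap), max(t_reenact)
    z = w_swap + w_reenact
    kl = (w_swap / z) * kl_div(t_swap, student) \
       + (w_reenact / z) * kl_div(t_reenact, student)
    return ce + kl
```

In this sketch, the smoothed-label cross-entropy keeps the student from collapsing onto hard targets, while the confidence-weighted KL terms pull it toward whichever teacher is more certain about the current sample.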