Exploring Temporal Coherence for More General Video Face Forgery Detection

Yinglin Zheng, Jianmin Bao, Dong Chen, Ming Zeng, Fang Wen; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 15044-15054

Abstract


Although current face manipulation techniques achieve impressive performance in terms of quality and controllability, they struggle to generate temporally coherent face videos. In this work, we explore how to take full advantage of temporal coherence for video face forgery detection. To achieve this, we propose a novel end-to-end framework that consists of two major stages. The first stage is a fully temporal convolution network (FTCN). The key insight of FTCN is to reduce the spatial convolution kernel size to 1 while keeping the temporal convolution kernel size unchanged. We surprisingly find that this special design helps the model extract temporal features and improves its generalization capability. The second stage is a Temporal Transformer network, which aims to explore long-term temporal coherence. The proposed framework is general and flexible, and can be trained directly from scratch without any pre-trained models or external datasets. Extensive experiments show that our framework outperforms existing methods and remains effective when applied to detect new types of face forgery videos.
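To make the two-stage design concrete, the following is a minimal sketch in PyTorch of the core idea described above: a 3D convolution whose kernel spans only the temporal axis (spatial kernel size 1), followed by a Transformer encoder applied over the frame dimension. This is not the authors' implementation; the class names, hyperparameters (temporal kernel 3, 2 encoder layers, 4 heads), spatial average pooling, and the binary classification head are illustrative assumptions.

```python
# Hedged sketch (not the authors' code) of a fully temporal conv block
# followed by a temporal Transformer head, as outlined in the abstract.
import torch
import torch.nn as nn


class FullyTemporalConvBlock(nn.Module):
    """3D conv whose kernel spans time only: kernel_size = (k_t, 1, 1)."""

    def __init__(self, in_channels: int, out_channels: int, temporal_kernel: int = 3):
        super().__init__()
        self.conv = nn.Conv3d(
            in_channels,
            out_channels,
            kernel_size=(temporal_kernel, 1, 1),   # temporal extent only, no spatial extent
            padding=(temporal_kernel // 2, 0, 0),  # preserve the temporal length
            bias=False,
        )
        self.bn = nn.BatchNorm3d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, height, width)
        return self.relu(self.bn(self.conv(x)))


class TemporalTransformerHead(nn.Module):
    """Pools spatial dims, then models long-range temporal coherence with a Transformer."""

    def __init__(self, dim: int, num_layers: int = 2, num_heads: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.cls = nn.Linear(dim, 1)  # real/fake logit (assumed head)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, channels, time, height, width) from the temporal conv stage
        tokens = feats.mean(dim=(3, 4)).transpose(1, 2)  # (batch, time, channels)
        encoded = self.encoder(tokens)                    # (batch, time, channels)
        return self.cls(encoded.mean(dim=1))              # (batch, 1)


if __name__ == "__main__":
    clip = torch.randn(2, 3, 16, 112, 112)              # two 16-frame RGB clips
    feats = FullyTemporalConvBlock(3, 64)(clip)          # (2, 64, 16, 112, 112)
    logits = TemporalTransformerHead(dim=64)(feats)      # (2, 1)
    print(feats.shape, logits.shape)
```

The sketch only illustrates the kernel-shape constraint and the temporal-attention stage; the paper's full FTCN backbone, training procedure, and input preprocessing are described in the PDF and supplementary material linked below.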

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Zheng_2021_ICCV,
    author    = {Zheng, Yinglin and Bao, Jianmin and Chen, Dong and Zeng, Ming and Wen, Fang},
    title     = {Exploring Temporal Coherence for More General Video Face Forgery Detection},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {15044-15054}
}