Reducing the Content Bias for AI-Generated Image Detection

Seoyeon Gye, Junwon Ko, Hyounguk Shon, Minchan Kwon, Junmo Kim; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 399-408

Abstract


Identifying AI-generated content is critical for the safe and ethical use of generative AI. Recent research has focused on developing detectors that generalize to unknown generators, with popular methods relying on either high-level features or low-level fingerprints. However, these methods have clear limitations: they are either biased towards unseen content or vulnerable to common image degradations such as JPEG compression. To address these issues, we propose a novel approach, SFLD, which incorporates PatchShuffle to integrate high-level semantic and low-level textural information. SFLD applies PatchShuffle at multiple levels, improving robustness and generalization across various generative models. Additionally, current benchmarks face challenges such as low image quality, insufficient content preservation, and limited class diversity. In response, we introduce TwinSynths, a new benchmark generation methodology that constructs visually near-identical pairs of real and synthetic images to ensure high quality and content preservation. Our extensive experiments and analysis show that SFLD outperforms existing methods in detecting a wide variety of fake images sourced from GANs, diffusion models, and TwinSynths, demonstrating state-of-the-art performance and generalization to novel generative models.
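The PatchShuffle operation referenced in the abstract can be illustrated as follows. This is a minimal sketch of the general idea, splitting an image into a grid of non-overlapping patches and randomly permuting them, which scrambles global semantic content while preserving local texture statistics. The function name, patch size, and use of NumPy here are illustrative assumptions, not the paper's actual implementation or hyperparameters; the abstract notes the operation is applied at multiple levels (e.g. several patch sizes).

```python
import numpy as np

def patch_shuffle(image: np.ndarray, patch_size: int, rng=None) -> np.ndarray:
    """Randomly permute the non-overlapping patches of an HxWxC image.

    Hypothetical sketch of the PatchShuffle idea: the output keeps the
    same pixels (low-level texture) but destroys the global layout
    (high-level content). Assumes H and W are divisible by patch_size.
    """
    rng = np.random.default_rng(rng)
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    gh, gw = h // patch_size, w // patch_size
    # Reshape into a (gh, gw, p, p, c) grid of patches.
    patches = (image.reshape(gh, patch_size, gw, patch_size, c)
                    .transpose(0, 2, 1, 3, 4))
    # Flatten the grid, permute patch order, and reassemble the image.
    flat = patches.reshape(gh * gw, patch_size, patch_size, c)
    flat = flat[rng.permutation(gh * gw)]
    patches = flat.reshape(gh, gw, patch_size, patch_size, c)
    return patches.transpose(0, 2, 1, 3, 4).reshape(h, w, c)
```

Varying `patch_size` gives a family of views from nearly intact (large patches) to heavily scrambled (small patches), which matches the abstract's description of applying PatchShuffle at multiple levels.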

Related Material


[bibtex]
@InProceedings{Gye_2025_WACV,
    author    = {Gye, Seoyeon and Ko, Junwon and Shon, Hyounguk and Kwon, Minchan and Kim, Junmo},
    title     = {Reducing the Content Bias for AI-Generated Image Detection},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {399-408}
}