Explainable AI-Generated Image Forensics: A Low-Resolution Perspective with Novel Artifact Taxonomy

Kaustubh Sharma; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2025, pp. 1576-1585

Abstract


The rapid advancement of generative AI has led to synthetic images that are increasingly indistinguishable from real ones, posing significant challenges for detection systems, especially at very low resolutions such as 32×32 pixels. This paper tackles two core problems in digital media forensics: (1) reliably distinguishing real from AI-generated images at low resolution, and (2) providing explainable visual reasoning for synthetic image artifacts. We augment and diversify the CIFAKE dataset and benchmark multiple classifiers, identifying ConvNeXt Tiny as the most effective model based on accuracy (98.65% on augmented CIFAKE), efficiency (13.4 ms inference per image), and adversarial robustness (96.80% accuracy after adversarial training). To enable interpretability, we propose a novel hierarchical taxonomy of visual artifacts and introduce an explainability pipeline that combines SinSR-based super-resolution, CLIP-guided artifact detection, and vision-language models for explanation generation. Our framework offers a robust, interpretable solution for real-world low-resolution image forensics.
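The core of CLIP-guided artifact detection is zero-shot scoring: an image embedding is compared against text embeddings of artifact descriptions, and a softmax over the cosine similarities yields per-artifact probabilities. The sketch below illustrates that scoring step with stub embeddings standing in for CLIP's encoders; the artifact prompts are illustrative placeholders, not the paper's actual taxonomy categories.

```python
import numpy as np

# Hypothetical artifact prompts (illustrative only; the paper's taxonomy
# defines its own hierarchy of artifact categories).
ARTIFACT_PROMPTS = [
    "a photo with unnatural texture repetition",
    "a photo with inconsistent lighting and shadows",
    "a photo with distorted object boundaries",
    "a natural photograph with no visible artifacts",
]

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_artifacts(image_emb, prompt_embs, temperature=100.0):
    """CLIP-style zero-shot scoring: softmax over scaled similarities."""
    sims = np.array([cosine_similarity(image_emb, p) for p in prompt_embs])
    logits = temperature * sims
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# Stub embeddings stand in for CLIP's image and text encoders.
rng = np.random.default_rng(0)
prompt_embs = [rng.standard_normal(512) for _ in ARTIFACT_PROMPTS]
image_emb = prompt_embs[0] + 0.1 * rng.standard_normal(512)  # near prompt 0

probs = score_artifacts(image_emb, prompt_embs)
print(ARTIFACT_PROMPTS[int(np.argmax(probs))])
```

In a real pipeline the stub vectors would be replaced by embeddings from a pretrained CLIP model (e.g. via `open_clip` or Hugging Face `transformers`), applied to the super-resolved image rather than the raw 32×32 input.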

Related Material


@InProceedings{Sharma_2025_ICCV,
  author    = {Sharma, Kaustubh},
  title     = {Explainable AI-Generated Image Forensics: A Low-Resolution Perspective with Novel Artifact Taxonomy},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2025},
  pages     = {1576-1585}
}