@InProceedings{Korshunov_2025_ICCV,
  author    = {Korshunov, Pavel and Vidit, Vidit and Mohammadi, Amir and Ecabert, Christophe and Shamoska, Nevena and Marcel, S\'ebastien and Yu, Zeqin and Tian, Ye and Ni, Jiangqun and Lazarevic, Lazar and Khizbullin, Renat and Evteeva, Anastasiia and Tochin, Alexey and Grishin, Aleksei and George, Anjith and Dealcala, Daniel and Endrei, Tamas and Mu\~noz-Haro, Javier and Tolosana, Ruben and Vera-Rodriguez, Ruben and Morales, Aythami and Fierrez, Julian and Cserey, Gy\"orgy and Sharma, Hardik and Chaudhary, Sachin and Dudhane, Akshay and Hambarde, Praful and Shukla, Amit and Shaily, Prateek and Kumar, Jayant and Hase, Ajinkya and Maurya, Satish and Sharma, Mridul and Dwivedi, Pallav},
  title     = {DeepID Challenge of Detecting Synthetic Manipulations in ID Documents},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2025},
  pages     = {521-530}
}
DeepID Challenge of Detecting Synthetic Manipulations in ID Documents
Abstract
The growing use of AI-based manipulations of ID document images threatens the know-your-customer (KYC) systems widely used in online banking and other digital authentication services. The DeepID challenge aimed to advance research on methods for detecting synthetic manipulations in ID documents. For that purpose, the FantasyID dataset of both bona fide and manipulated fantasy ID cards was provided to the participants for training and tuning their systems. Submissions were evaluated on a FantasyID test set created with both seen and unseen attacks, and on an out-of-domain private dataset of 20K real ID documents containing both genuine bona fide and manipulated samples. The challenge included two tracks: 1) a binary detection track, whose goal was to determine whether an ID document has been manipulated, and 2) a localization track, whose goal was to identify the manipulated regions of an ID document. Evaluation was based on the F1-score for both tracks, and submissions were ranked by the weighted average F1-score over the FantasyID (weight 0.3) and private (weight 0.7) test sets. Out of more than 100 registrations, 26 teams participated; 6 of them beat the provided TruFor baseline in the detection track and 4 did so in the localization track. The Sunlight team from Sun Yat-sen University won both tracks of the challenge, and UAM-Biometrics ranked best on the private dataset.
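The ranking metric described above can be sketched in a few lines. This is an illustrative reconstruction, not the organizers' evaluation code; the function names and the per-set F1 inputs are assumptions, while the 0.3/0.7 weights come from the challenge description.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Standard F1-score from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def challenge_score(f1_fantasyid: float, f1_private: float) -> float:
    """Weighted average used for ranking: 0.3 * FantasyID + 0.7 * private test set."""
    return 0.3 * f1_fantasyid + 0.7 * f1_private
```

For example, a system scoring an F1 of 0.8 on FantasyID and 0.9 on the private set would be ranked by `challenge_score(0.8, 0.9)`, i.e. 0.87, reflecting the heavier weight placed on the out-of-domain private data.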