The SignEval 2025 Challenge at the ICCV Multimodal Sign Language Recognition Workshop: Results and Discussion
Abstract
This paper summarizes the results of SignEval 2025, the first multimodal sign language recognition challenge, organized at ICCV 2025. The challenge featured two tracks: (i) a continuous sign language recognition (CSLR) track based on the newly curated Isharah dataset, a Saudi Sign Language corpus, and (ii) an isolated sign language recognition (ISLR) track using the MultiMeDaLIS dataset, a multimodal Italian Sign Language corpus tailored for doctor-patient communication. Two tasks are defined within the CSLR track: Signer-Independent and Unseen-Sentences. The Signer-Independent task tests a model's ability to generalize across signers, a critical property for scalable real-world CSLR systems. The Unseen-Sentences task evaluates a model's capability to recognize novel sentence compositions by leveraging learned grammar and semantics. In the ISLR track, participants were challenged to classify isolated signs from MultiMeDaLIS using only the radar and RGB modalities. The challenge hosted two leaderboards to showcase the submitted methods, with participants setting new benchmarks and achieving state-of-the-art results on both tracks. More information on the challenges, tasks, leaderboards, baselines, and development kits is available at https://multimodal-sign-language-recognition.github.io/ICCV-2025/.
Related Material
[pdf] [bibtex]

@InProceedings{Luqman_2025_ICCV,
    author    = {Luqman, Hamzah and Mineo, Raffaele and Aljubran, Murtadha and Hasanaath, Ahmed Abul and Sorrenti, Amelia and Alyami, Sarah and Al-Azani, Sadam and Alowaifeer, Maad and Moon, Jihwan and Javorek, V\'aclav and \v{Z}elezn\'y, Tom\'a\v{s} and Hr\'uz, Marek and Caligiore, Gaia and Giancola, Silvio and Polikovsky, Senya and Alfarraj, Motaz and Fontana, Sabina and Mahmud, Mufti and Khan, Muhammad Haris and Islam, Kamrul and Gurbuz, Sevgi and Ragonese, Egidio and Bellitto, Giovanni and Salanitri, Federica Proietto and Spampinato, Concetto and Palazzo, Simone},
    title     = {The SignEval 2025 Challenge at the ICCV Multimodal Sign Language Recognition Workshop: Results and Discussion},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2025},
    pages     = {5027-5036}
}