Intrinsically-Interpretable Siamese Networks for Identity Recognition

Marco A. Rocha, Jaime S. Cardoso, Helena Montenegro; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2025, pp. 5942-5951

Abstract


Deep learning models have excelled in computer vision tasks in the past decade, but their lack of transparency raises ethical and legal concerns, especially in high-stakes areas such as surveillance and law enforcement. As such, regulations like the European Union's General Data Protection Regulation are now demanding interpretable Artificial Intelligence systems. This paper focuses on automatic face recognition, where existing systems lack interpretability and research into explainable alternatives is limited. To address this gap, we propose two interpretable facial verification models based on Siamese Networks that match and compare semantically-aligned local regions in the images. Experiments show these models rival and even outperform traditional baselines while offering clearer, more accountable explanations, advancing ethical and legally compliant facial recognition.
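The abstract's core idea can be illustrated with a minimal sketch: a Siamese pair shares one embedding function, applies it to semantically aligned local regions of both images, and exposes a per-region similarity map as the explanation. The paper's actual architecture is not specified here, so everything below (a linear projection as the shared tower, a fixed 16x16 grid as the "regions", cosine similarity, the mean-score threshold) is a hypothetical stand-in, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical shared projection ("tower") weights: both branches use the
# same W, which is what makes the pair Siamese.
W = rng.standard_normal((16 * 16, 32))

def embed_region(patch: np.ndarray) -> np.ndarray:
    """Embed one 16x16 local region with the shared linear projection."""
    v = patch.reshape(-1) @ W
    return v / (np.linalg.norm(v) + 1e-8)  # unit-normalize for cosine similarity

def region_similarities(img_a: np.ndarray, img_b: np.ndarray, size: int = 16) -> dict:
    """Compare aligned regions (same grid cell in both images).

    Returns one cosine similarity per region -- the kind of local,
    per-region evidence an interpretable verifier can surface.
    """
    sims = {}
    for i in range(0, img_a.shape[0], size):
        for j in range(0, img_a.shape[1], size):
            ea = embed_region(img_a[i:i + size, j:j + size])
            eb = embed_region(img_b[i:i + size, j:j + size])
            sims[(i // size, j // size)] = float(ea @ eb)
    return sims

def verify(img_a: np.ndarray, img_b: np.ndarray, threshold: float = 0.5):
    """Aggregate region similarities into a verification decision."""
    sims = region_similarities(img_a, img_b)
    score = float(np.mean(list(sims.values())))
    return score, score >= threshold
```

In a trained model the shared tower would be a learned CNN and the regions would come from semantic alignment (e.g. facial landmarks) rather than a fixed grid; the structure of the comparison, one score per matched region feeding a final decision, is what makes the explanation local and inspectable.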

Related Material


@InProceedings{Rocha_2025_ICCV,
  author    = {Rocha, Marco A. and Cardoso, Jaime S. and Montenegro, Helena},
  title     = {Intrinsically-Interpretable Siamese Networks for Identity Recognition},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2025},
  pages     = {5942-5951}
}