PupilTAN: A Few-Shot Adversarial Pupil Localizer
Nikolaos Poulopoulos, Emmanouil Z. Psarakis, Dimitrios Kosmopoulos

@InProceedings{Poulopoulos_2021_CVPR,
  author    = {Poulopoulos, Nikolaos and Psarakis, Emmanouil Z. and Kosmopoulos, Dimitrios},
  title     = {PupilTAN: A Few-Shot Adversarial Pupil Localizer},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2021},
  pages     = {3134-3142}
}
Abstract
Eye center localization is a challenging problem faced by many computer vision applications. The challenges typically stem from scene variability, such as the wide range of shapes, lighting conditions, view angles and occlusions. The increasing interest in deep neural networks comes with a need for large volumes of training data; a significant issue is the dependency on labeled data, which are expensive to obtain and susceptible to errors. To address these issues, we propose a deep network, dubbed PupilTAN, that performs image-to-heatmap Translation within an Adversarial training framework and solves the eye localization problem in a few-shot unsupervised way. The key idea is to estimate, using only a few ground-truth shots, the pdf of the heatmap centers and to use it as a generator that creates random heatmaps following the same probability distribution as the real ones. We show that training the deep network with these artificial heatmaps in an adversarial framework not only reduces the dependence on labeled data, but also leads to a significant accuracy improvement. The proposed network achieves real-time performance on a general-purpose computer and improves the state-of-the-art accuracy on both the MUCT and BioID datasets, even compared with supervised techniques. Furthermore, our model remains robust even when its size is reduced to 1/16 of the original network (0.2M parameters), demonstrating accuracy comparable to the state of the art and high practical value for real-time applications.
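The central mechanism described in the abstract, fitting a probability density to a handful of labeled pupil centers and sampling it to produce synthetic target heatmaps for adversarial training, can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the Gaussian KDE, the isotropic-Gaussian heatmap rendering, the 64x64 resolution, the sigma value, and all function names are assumptions made for the example.

import numpy as np
from scipy.stats import gaussian_kde

def fit_center_pdf(centers):
    # Estimate the pdf of pupil-center locations from the few ground-truth shots.
    # centers: (N, 2) array of (x, y) coordinates; gaussian_kde expects (dims, samples).
    return gaussian_kde(centers.T)

def render_heatmap(center, size=(64, 64), sigma=2.0):
    # Render an isotropic Gaussian heatmap around a given center (assumed shape).
    h, w = size
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = center
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))

def sample_fake_heatmaps(kde, n, size=(64, 64), sigma=2.0):
    # Draw n centers from the estimated pdf and render one synthetic heatmap per center.
    centers = kde.resample(n).T  # (n, 2) sampled (x, y) centers
    return np.stack([render_heatmap(c, size, sigma) for c in centers])

# Example: a handful of labeled pupil centers is enough to seed the generator
# (the coordinates below are made up for illustration).
few_shot_centers = np.array([[30.5, 33.0], [31.2, 34.1], [29.8, 32.4],
                             [32.0, 33.7], [30.1, 34.5]])
kde = fit_center_pdf(few_shot_centers)
fake_heatmaps = sample_fake_heatmaps(kde, n=16)

In the adversarial setup, such sampled heatmaps would serve as "real" examples for the discriminator, while the image-to-heatmap network learns to produce outputs indistinguishable from them, without requiring per-image labels.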