Learning to Identify While Failing to Discriminate

Jure Sokolic, Qiang Qiu, Miguel R. D. Rodrigues, Guillermo Sapiro; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 2537-2544

Abstract

Privacy and fairness are critical in computer vision applications, in particular when dealing with human identification. Achieving a universally secure, private, and fair system is practically impossible, as the exploitation of additional data can reveal private information in the original data. Faced with this challenge, we propose a new line of research, where privacy is learned and used in a closed environment. The goal is to ensure that a given entity, trusted to infer certain information using our data, is blocked from inferring protected information from it. We design a system that learns to succeed on the positive task while simultaneously failing at the negative one, and illustrate this with challenging cases where the positive task (face verification) is harder than the negative one (gender classification). The framework opens the door to privacy and fairness in very important closed scenarios, ranging from companies that accumulate private data to law enforcement and hospitals.

Related Material

[pdf]
[bibtex]
@InProceedings{Sokolic_2017_ICCV,
author = {Sokolic, Jure and Qiu, Qiang and Rodrigues, Miguel R. D. and Sapiro, Guillermo},
title = {Learning to Identify While Failing to Discriminate},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
month = {Oct},
year = {2017},
pages = {2537-2544}
}