Trusting the Computer in Computer Vision: A Privacy-Affirming Framework

Andrew Tzer-Yeu Chen, Morteza Biglari-Abhari, Kevin I-Kai Wang; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2017, pp. 56-63

Abstract


The use of surveillance cameras continues to increase, ranging from conventional applications such as law enforcement to newer scenarios with looser requirements such as gathering business intelligence. Humans still play an integral part in using and interpreting the footage from these systems, but are also a significant factor in causing unintentional privacy breaches. As computer vision methods continue to improve, we argue in this position paper that system designers should reconsider the role of machines in surveillance, and how automation can be used to help protect privacy. We explore this by discussing the impact of the human-in-the-loop, the potential for using abstraction and distributed computing to further privacy goals, and an approach for determining when video footage should be hidden from human users. We propose that in an ideal surveillance scenario, a privacy-affirming framework causes collected camera footage to be processed by computers directly, and never shown to humans. This implicitly requires humans to establish trust, to believe that computer vision systems can generate sufficiently accurate results without human supervision, so that if information about people must be gathered, unintentional data collection is mitigated as much as possible.
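As a rough illustration of the framework's central idea (footage is consumed by the machine, and only abstracted results such as person counts ever reach a human), the Python sketch below shows one possible shape of such a pipeline. It is not taken from the paper: the detector interface, the FrameSummary abstraction, and the escalation threshold are all hypothetical assumptions introduced here for illustration.

    from dataclasses import dataclass

    @dataclass
    class FrameSummary:
        # Abstracted, privacy-preserving output: no pixels, no identities.
        timestamp: float
        person_count: int

    class PrivacyAffirmingPipeline:
        # Hypothetical sketch: the machine sees the pixels; the human never does.
        def __init__(self, detector, escalation_threshold=None):
            self.detector = detector                  # any person detector
            self.escalation_threshold = escalation_threshold

        def process(self, frame, timestamp):
            detections = self.detector(frame)         # machine-only access to footage
            summary = FrameSummary(timestamp, len(detections))
            del frame                                 # raw frame discarded, never stored or displayed
            return summary

        def report(self, summary):
            # Humans receive only the abstraction, never the image.
            if (self.escalation_threshold is not None
                    and summary.person_count >= self.escalation_threshold):
                return f"ALERT t={summary.timestamp}: unusual crowding"
            return f"t={summary.timestamp}: {summary.person_count} people present"

    # Usage with a stand-in detector (a real deployment would plug in a trained model):
    pipeline = PrivacyAffirmingPipeline(lambda frame: [(0, 0, 10, 10)],
                                        escalation_threshold=5)
    print(pipeline.report(pipeline.process(frame=b"...pixels...", timestamp=0.0)))

The decision in report() is a stand-in for the paper's question of when footage should be hidden from human users: any escalation beyond the abstraction would sit behind an explicit, auditable policy check rather than default human viewing.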

Related Material


[pdf]
[bibtex]
@InProceedings{Chen_2017_CVPR_Workshops,
author = {Chen, Andrew Tzer-Yeu and Biglari-Abhari, Morteza and Wang, Kevin I-Kai},
title = {Trusting the Computer in Computer Vision: A Privacy-Affirming Framework},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}