Towards Privacy-Preserving Visual Recognition via Adversarial Training: A Pilot Study

Zhenyu Wu, Zhangyang Wang, Zhaowen Wang, Hailin Jin; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 606-624

Abstract

This paper aims to improve privacy-preserving visual recognition, an increasingly demanded feature of smart camera applications, by formulating a unique adversarial training framework. The proposed framework explicitly learns a degradation transform for the original video inputs, in order to optimize the trade-off between target task performance and the associated privacy budget on the degraded video. A notable challenge is that the privacy budget, often defined and measured in task-driven contexts, cannot be reliably indicated by the performance of any single model, because strong privacy protection must hold against any possible model that attempts to recover the private information. This uncommon situation motivates two strategies for improving how well the learned degradation generalizes in protecting privacy against unseen hacker models; novel training strategies and evaluation protocols are designed accordingly. Two experiments on privacy-preserving action recognition, with the privacy budget defined in different ways, demonstrate the effectiveness of the proposed framework in maintaining high target task (action recognition) performance while suppressing the risk of privacy breach.
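To make the described trade-off concrete, below is a minimal, illustrative PyTorch sketch of the adversarial setup outlined in the abstract: a learnable degradation transform, a target (action recognition) model trained on the degraded video, and a privacy "budget" model that plays the role of an attacker. The module names (Degrader, TargetNet-style heads), network sizes, the weighting factor gamma, and the negated-cross-entropy adversarial term are assumptions for illustration, not the authors' implementation or exact objective.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Degrader(nn.Module):
    """Learnable degradation transform applied to input frames (placeholder architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

degrader = Degrader()
# Toy classifier heads standing in for the target-task and privacy-budget models.
target_net = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 10))  # action classes
budget_net = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 5))   # privacy labels

opt_d = torch.optim.Adam(degrader.parameters(), lr=1e-4)
opt_t = torch.optim.Adam(target_net.parameters(), lr=1e-4)
opt_b = torch.optim.Adam(budget_net.parameters(), lr=1e-4)
gamma = 1.0  # assumed weight of the privacy-budget term in the trade-off

def train_step(frames, action_labels, privacy_labels):
    # 1) Update the budget (privacy-attacking) model on the current degraded frames.
    opt_b.zero_grad()
    budget_loss = F.cross_entropy(budget_net(degrader(frames).detach()), privacy_labels)
    budget_loss.backward()
    opt_b.step()

    # 2) Update degrader + target model: keep action recognition accurate while
    #    suppressing the budget model (here via a negated cross-entropy, a simplification).
    opt_d.zero_grad(); opt_t.zero_grad()
    degraded = degrader(frames)
    task_loss = F.cross_entropy(target_net(degraded), action_labels)
    adv_loss = -F.cross_entropy(budget_net(degraded), privacy_labels)
    (task_loss + gamma * adv_loss).backward()
    opt_d.step(); opt_t.step()

In the paper's setting, the key difficulty noted in the abstract is that a single budget_net is not a reliable privacy indicator, so the learned degradation must be validated against unseen attacker models; the ensemble/restarting strategies that address this are not shown in the sketch above.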

Related Material

[pdf] [arXiv]
[bibtex]
@InProceedings{Wu_2018_ECCV,
author = {Wu, Zhenyu and Wang, Zhangyang and Wang, Zhaowen and Jin, Hailin},
title = {Towards Privacy-Preserving Visual Recognition via Adversarial Training: A Pilot Study},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}