Towards Adversarially Robust Object Detection

Haichao Zhang, Jianyu Wang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 421-430

Abstract


Object detection is an important vision task and has emerged as an indispensable component in many vision systems, making its robustness an increasingly important performance factor for practical applications. While object detection models have been shown to be vulnerable to adversarial attacks by many recent works, very few efforts have been devoted to improving their robustness. In this work, we take an initial step in this direction. We first revisit and systematically analyze object detectors and many recently developed attacks from the perspective of model robustness. We then present a multi-task learning perspective on object detection and identify an asymmetric role of the task losses. We further develop an adversarial training approach that can leverage multiple sources of attacks to improve the robustness of detection models. Extensive experiments on PASCAL-VOC and MS-COCO verify the effectiveness of the proposed approach.
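To make the high-level idea of "leveraging multiple sources of attacks" concrete, the sketch below shows one way adversarial training for a detector might be organized: craft one attack candidate per task loss (classification and localization), keep whichever is strongest under the full detection loss, and update on it. This is an illustrative sketch only, not the authors' released implementation; the `detector(images, targets)` interface returning a dict of per-task losses, the single-step sign attack, and the epsilon value are all assumptions made for the example.

```python
# Illustrative sketch (not the authors' code): adversarial training for a
# detector using attack candidates derived from multiple task losses.
import torch


def adv_train_step(detector, optimizer, images, targets, eps=8.0 / 255):
    # Hypothetical detector API: returns per-task losses, e.g. {'cls': ..., 'loc': ...}.
    images = images.clone().detach().requires_grad_(True)
    losses = detector(images, targets)

    # One single-step sign-gradient attack per task loss (the attack "sources").
    g_cls = torch.autograd.grad(losses['cls'], images, retain_graph=True)[0]
    g_loc = torch.autograd.grad(losses['loc'], images)[0]
    adv_cls = (images + eps * g_cls.sign()).clamp(0, 1).detach()
    adv_loc = (images + eps * g_loc.sign()).clamp(0, 1).detach()

    # Keep whichever candidate is strongest under the overall detection loss.
    with torch.no_grad():
        total = lambda x: sum(detector(x, targets).values())
        adv = adv_cls if total(adv_cls) >= total(adv_loc) else adv_loc

    # Standard parameter update on the selected adversarial examples.
    optimizer.zero_grad()
    sum(detector(adv, targets).values()).backward()
    optimizer.step()
```

In practice a stronger multi-step attack (e.g. PGD) could replace the single-step perturbation, but the selection-among-task-attacks structure stays the same.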

Related Material


[bibtex]
@InProceedings{Zhang_2019_ICCV,
author = {Zhang, Haichao and Wang, Jianyu},
title = {Towards Adversarially Robust Object Detection},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}