Benchmarking Image Classifiers for Detecting Physical Out-of-Distribution Examples
The rising popularity of deep neural networks (DNNs) in computer vision has raised concerns about their robustness in the real world. Recent work has clearly demonstrated the vulnerability of these networks to carefully crafted adversarial attacks that yield out-of-distribution (OOD) samples. Interestingly, the majority of the existing literature focuses on adversarial attacks crafted for the digital domain only, even though physical adversarial attacks are easier to deploy in the real world and achieve higher attack success rates than digital perturbations. A primary reason for this dearth of studies on physical out-of-distribution images is the lack of benchmark datasets. To overcome this limitation, we propose a novel out-of-distribution dataset built from adversarial patches of different variations, aimed at advancing the robustness of deep networks against such stealthy out-of-distribution images. We also conduct extensive experiments under both seen and unseen patch settings and observe that unseen adversarial patches are hard to defend against. By delving into the complexities of defending against patch attacks, we hope this study will inspire future researchers to incorporate physical OOD attacks into their defense strategies.