Invisibility Cloak: Hiding Anomalies in Videos via Adversarial Machine Learning Attacks
Abstract
Video anomaly detection (VAD) plays a crucial role in various fields, providing an invaluable tool for enhancing security, safety, and operational efficiency. VAD systems are designed to identify irregular patterns, unusual behaviors, or unexpected events within a given video stream. Among these, weakly supervised VAD (wVAD) systems have gained significant popularity due to their ability to leverage anomalous video samples (i.e., weak labels), which can be easily obtained in many applications, unlike anomalous frame samples (i.e., fully labelled data). Their superior performance compared to unsupervised VAD methods makes wVAD systems particularly attractive in real-world applications such as security surveillance and content moderation for online video streaming platforms. The potential use of wVAD systems in such critical applications also raises concerns about adversarial machine learning attacks: adversaries may exploit vulnerabilities within these systems to evade detection, posing significant risks to the security and integrity of the system. This study explores the vulnerabilities of wVAD systems by comprehensively analyzing their weaknesses under a white-box setting. We propose a metric for quantifying the efficacy of such attacks and show that practical attacks can achieve up to a 99% success rate in hiding anomalies.
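The abstract describes white-box attacks that perturb video frames so a wVAD model's anomaly scores stay below the detection threshold. As a rough illustration of the attack class (not the paper's actual method or proposed metric), the sketch below uses a generic PGD-style L-infinity perturbation against a hypothetical per-frame scoring model; the names score_model, hide_anomaly_pgd, and attack_success_rate are illustrative assumptions.

    # Illustrative sketch only: a generic white-box attack that suppresses
    # anomaly scores. The paper's actual attack and metric may differ.
    import torch

    def hide_anomaly_pgd(score_model, frames, epsilon=8/255, alpha=1/255,
                         steps=40, threshold=0.5):
        """PGD-style perturbation that minimizes per-frame anomaly scores
        until they fall below the detection threshold (L_inf budget epsilon)."""
        frames_adv = frames.clone().detach()
        for _ in range(steps):
            frames_adv.requires_grad_(True)
            scores = score_model(frames_adv)          # per-frame scores in [0, 1]
            loss = scores.clamp(min=threshold).sum()  # only frames still flagged contribute gradient
            grad, = torch.autograd.grad(loss, frames_adv)
            with torch.no_grad():
                frames_adv = frames_adv - alpha * grad.sign()           # descend to lower scores
                delta = (frames_adv - frames).clamp(-epsilon, epsilon)  # project onto L_inf ball
                frames_adv = (frames + delta).clamp(0, 1)               # keep valid pixel range
        return frames_adv.detach()

    # One plausible success measure (an assumption, not the paper's metric):
    # the fraction of originally detected anomalous frames that the attack hides.
    def attack_success_rate(scores_before, scores_after, threshold=0.5):
        detected = scores_before >= threshold
        hidden = (scores_after < threshold) & detected
        return hidden.float().sum() / detected.float().sum().clamp(min=1)

The hinge-style loss (clamping scores at the threshold) stops pushing on frames once they are already classified as normal, concentrating the perturbation budget on frames that the detector still flags.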
Related Material
[pdf] [supp]
[bibtex]
@InProceedings{Karim_2025_WACV,
    author    = {Karim, Hamza and Yilmaz, Yasin},
    title     = {Invisibility Cloak: Hiding Anomalies in Videos via Adversarial Machine Learning Attacks},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV) Workshops},
    month     = {February},
    year      = {2025},
    pages     = {344-353}
}