@InProceedings{Hu_2021_ICCV,
  author    = {Hu, Shu and Ke, Lipeng and Wang, Xin and Lyu, Siwei},
  title     = {TkML-AP: Adversarial Attacks to Top-k Multi-Label Learning},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {7649-7657}
}
TkML-AP: Adversarial Attacks to Top-k Multi-Label Learning
Abstract
Top-k multi-label learning, which returns the top-k predicted labels for an input, has many practical applications such as image annotation, document analysis, and web search engines. However, the vulnerability of such algorithms to dedicated adversarial perturbation attacks has not been extensively studied previously. In this work, we develop methods to create adversarial perturbations that can be used to attack top-k multi-label learning-based image annotation systems (TkML-AP). Our methods explicitly consider the top-k ranking relation and are based on novel loss functions. Experimental evaluations on large-scale benchmark datasets, including PASCAL VOC and MS COCO, demonstrate the effectiveness of our methods in reducing the performance of state-of-the-art top-k multi-label learning methods under both untargeted and targeted attacks.
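To make the setting concrete, the sketch below illustrates an *untargeted* top-k attack in the abstract's sense: perturb the input until no ground-truth label remains among the k highest-scoring predictions. This is a minimal illustration assuming a hypothetical linear scorer (`scores = W @ x`), not the paper's actual models or loss functions; the function names (`top_k_labels`, `untargeted_topk_attack`) and the greedy label-swap update are our own illustrative choices.

```python
import numpy as np

def top_k_labels(scores, k):
    """Return the set of indices of the k highest-scoring labels."""
    return set(np.argsort(scores)[::-1][:k])

def untargeted_topk_attack(x, W, true_labels, k, step=0.01, max_iters=200):
    """Sketch of an untargeted top-k attack on a hypothetical linear scorer.

    At each step, pick the strongest ground-truth label still in the top-k
    and the strongest non-true label, then move x along W[j] - W[y], which
    raises label j's score while lowering label y's. Stop once no true
    label is ranked in the top-k (the untargeted success condition).
    """
    x_adv = x.astype(float).copy()
    for _ in range(max_iters):
        scores = W @ x_adv
        offending = top_k_labels(scores, k) & set(true_labels)
        if not offending:
            break  # success: no ground-truth label is in the top-k
        y = max(offending, key=lambda i: scores[i])          # true label to demote
        j = max(set(range(len(scores))) - set(true_labels),
                key=lambda i: scores[i])                     # non-true label to promote
        x_adv += step * (W[j] - W[y])
    return x_adv
```

A targeted attack would differ only in the stopping condition and update direction: instead of evicting true labels, it would promote a chosen target set into the top-k. The paper's contribution lies in loss functions that handle the top-k ranking relation directly, which this greedy sketch does not capture.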