Class-Conditioned Transformation for Enhanced Robust Image Classification

Tsachi Blau, Roy Ganz, Chaim Baskin, Michael Elad, Alex Bronstein; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 6538-6547

Abstract


Robust classification methods predominantly concentrate on algorithms that address a specific threat model, resulting in ineffective defenses against other threat models. Real-world applications are exposed to this vulnerability, as malicious attackers might exploit alternative threat models. In this work, we propose a novel test-time, threat-model-agnostic algorithm that enhances Adversarially Trained (AT) models. Our method operates through COnditional image transformation and DIstance-based Prediction (CODIP) and includes two main steps: first, we transform the input image, which may be either clean or attacked, into each dataset class; next, we make a prediction based on the shortest transformation distance. The conditional transformation exploits the perceptually aligned gradients property of AT models and, as a result, eliminates the need for additional models or additional training. Moreover, it allows users to choose the desired balance between clean and robust accuracy without retraining. The proposed method achieves state-of-the-art results, demonstrated through extensive experiments on various models, AT methods, datasets, and attack types. Notably, applying CODIP leads to substantial robust accuracy improvements of up to +23%, +20%, +26%, and +22% on the CIFAR10, CIFAR100, ImageNet, and Flowers datasets, respectively.
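To make the two-step procedure concrete, below is a minimal PyTorch sketch of the idea as stated in the abstract: a class-conditioned, gradient-based transformation of the input using the AT model, followed by a shortest-distance prediction. The function name codip_predict and the hyperparameters num_steps and step_size are illustrative assumptions, not the authors' reference implementation.

# Minimal sketch of the CODIP idea described in the abstract.
# Names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

def codip_predict(model, x, num_classes, num_steps=10, step_size=0.5):
    """Transform the input toward every class using the AT model's gradients,
    then predict the class whose transformation moved the image the least."""
    model.eval()
    distances = []
    for c in range(num_classes):
        x_c = x.clone().detach().requires_grad_(True)
        target = torch.full((x.shape[0],), c, dtype=torch.long, device=x.device)
        for _ in range(num_steps):
            loss = F.cross_entropy(model(x_c), target)
            grad, = torch.autograd.grad(loss, x_c)
            # Gradient step toward class c; the perceptually aligned gradients
            # of AT models make this a semantic, class-conditioned transformation.
            x_c = (x_c - step_size * grad).clamp(0, 1).detach().requires_grad_(True)
        # Distance between the original image and its class-c transformation.
        dist = (x_c.detach() - x).flatten(1).norm(p=2, dim=1)
        distances.append(dist)
    # The shortest transformation distance determines the prediction.
    return torch.stack(distances, dim=1).argmin(dim=1)

In this sketch, the trade-off between clean and robust accuracy mentioned in the abstract would be controlled at test time through num_steps and step_size, without any retraining of the classifier.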

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Blau_2025_WACV,
    author    = {Blau, Tsachi and Ganz, Roy and Baskin, Chaim and Elad, Michael and Bronstein, Alex},
    title     = {Class-Conditioned Transformation for Enhanced Robust Image Classification},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {6538-6547}
}