Improving Deep Detector Robustness via Detection-Related Discriminant Maximization and Reorganization

Jung Im Choi, Qizhen Lan, Qing Tian; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 1518-1527

Abstract


Deep visual detectors are known to be vulnerable to adversarial attacks, raising concerns about their real-world applications (e.g., self-driving perception). We argue that this vulnerability arises from the spurious dependency of final detections on irrelevant/loophole latent dimensions. The more such dimensions there are, the more susceptible the detector is to adversarial input perturbations. To enhance detection robustness, we propose Detection-related Discriminant Maximization and Reorganization (DDMR), which condenses the detection utility into a compressed set of relevant dimensions while deactivating the influence of irrelevant ones. This approach also alleviates the misalignment between the two task domains in visual detection and, consequently, between their gradients, enabling the generation of more potent adversarial attacks and defenses for visual detectors within the adversarial training framework. Extensive experiments with four cutting-edge visual detectors on the KITTI and COCO datasets showcase the efficacy of the proposed approach in improving the adversarial robustness of deep visual detectors against both white-box and black-box attacks. For example, on the KITTI dataset, our method demonstrates increases in robustness of up to 12.4% and 28.0% without and with adversarial training, respectively.
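The abstract does not specify DDMR's exact formulation, but the core idea of concentrating detection utility into a few discriminant-relevant latent dimensions can be illustrated with a toy Fisher-style discriminant ratio. The sketch below is a hypothetical, simplified analogue (not the paper's loss): it scores each latent dimension by between-class versus within-class variance, keeps the most discriminative dimensions, and zeroes out the rest.

```python
import numpy as np

def discriminant_ratio(feats, labels):
    """Per-dimension Fisher-style discriminant ratio: between-class
    variance over within-class variance. Higher values mark dimensions
    more useful for separating classes -- a toy proxy for
    'detection-related' dimensions, not the paper's actual criterion."""
    classes = np.unique(labels)
    overall_mean = feats.mean(axis=0)
    between = np.zeros(feats.shape[1])
    within = np.zeros(feats.shape[1])
    for c in classes:
        fc = feats[labels == c]
        mc = fc.mean(axis=0)
        between += len(fc) * (mc - overall_mean) ** 2
        within += ((fc - mc) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def select_relevant_dims(feats, labels, keep):
    """Keep the `keep` most discriminative latent dimensions and zero the
    rest -- a crude stand-in for condensing utility into few dimensions
    while deactivating irrelevant ones."""
    ratio = discriminant_ratio(feats, labels)
    keep_idx = np.argsort(ratio)[-keep:]
    mask = np.zeros(feats.shape[1], dtype=bool)
    mask[keep_idx] = True
    return feats * mask, keep_idx
```

In this toy view, an attacker perturbing the zeroed-out (irrelevant) dimensions can no longer influence downstream detections, which is the intuition behind suppressing loophole dimensions.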

Related Material


[pdf]
[bibtex]
@InProceedings{Choi_2025_WACV,
  author    = {Choi, Jung Im and Lan, Qizhen and Tian, Qing},
  title     = {Improving Deep Detector Robustness via Detection-Related Discriminant Maximization and Reorganization},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {1518-1527}
}