FullCycle: Full Stage Adversarial Attack For Reinforcement Learning Robustness Evaluation

Zhenshu Ma, Xuan Cai, Changhang Tian, Yuqi Fan, Kemou Jiang, Gangfu Liu, Xuesong Bai, Aoyong Li, Yilong Ren, Haiyang Yu; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops, 2025, pp. 3554-3560

Abstract


Recent advances in deep reinforcement learning (DRL) have demonstrated significant potential in applications such as autonomous driving and embodied intelligence. However, these large-scale, multi-parametric DRL models remain vulnerable to adversarial examples, while their prolonged training durations incur substantial temporal and economic costs. Current methods primarily focus on adversarial attacks during isolated training phases, whereas practical deployments may face interference across all training stages. To address this gap, we propose FullCycle, a full-stage adversarial attack method that systematically assesses DRL robustness by injecting perturbations throughout the complete training pipeline. Experimental results reveal that FullCycle degrades algorithm convergence speed and agent performance to varying degrees. This work establishes a novel paradigm for robustness evaluation in reinforcement learning systems.
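The abstract does not detail how perturbations are injected, so the following is only a minimal illustrative sketch of the general idea: bounded observation perturbations applied at every step of a toy training loop, i.e. across all stages rather than one isolated phase. All function names, the uniform-noise attack, and the dummy policy/environment are hypothetical, not the paper's actual method.

```python
import numpy as np

def perturb_observation(obs, epsilon=0.05, rng=None):
    """Apply a bounded (L-infinity) perturbation to an observation.

    Uniform noise within [-epsilon, epsilon] is a generic stand-in;
    the paper's concrete attack is not specified in the abstract.
    """
    if rng is None:
        rng = np.random.default_rng()
    delta = rng.uniform(-epsilon, epsilon, size=np.shape(obs))
    return np.asarray(obs) + delta

def train_with_full_stage_attack(num_steps=300, epsilon=0.05, seed=0):
    """Toy training loop where the observation is attacked at EVERY step,
    so early, middle, and late training stages all see perturbed input."""
    rng = np.random.default_rng(seed)
    obs = rng.standard_normal(4)
    returns = []
    for _ in range(num_steps):
        adv_obs = perturb_observation(obs, epsilon, rng)
        # Placeholder "policy": action is a bounded function of the attacked obs.
        action = np.tanh(adv_obs.sum())
        # Dummy reward and dummy environment transition, for illustration only.
        returns.append(-abs(action))
        obs = rng.standard_normal(4)
    return float(np.mean(returns))
```

Sweeping `epsilon` in such a loop is one way to observe how attack strength during training affects convergence behavior, which is the kind of degradation the abstract reports.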

Related Material


[pdf]
[bibtex]
@InProceedings{Ma_2025_CVPR,
    author    = {Ma, Zhenshu and Cai, Xuan and Tian, Changhang and Fan, Yuqi and Jiang, Kemou and Liu, Gangfu and Bai, Xuesong and Li, Aoyong and Ren, Yilong and Yu, Haiyang},
    title     = {FullCycle: Full Stage Adversarial Attack For Reinforcement Learning Robustness Evaluation},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
    month     = {June},
    year      = {2025},
    pages     = {3554-3560}
}