Inaccuracy of State-Action Value Function for Non-Optimal Actions in Adversarially Trained Deep Neural Policies

Ezgi Korkmaz; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 2323-2327

Abstract


The introduction of deep neural networks as function approximators for the state-action value function has created a new research area for self-learning systems that learn policies from high-dimensional inputs. While the success of deep neural policies has led to their deployment in diverse application domains, there are significant concerns regarding their robustness against specifically crafted malicious perturbations added to their inputs. Several studies have focused on making deep neural policies resistant to such perturbations by training in their presence (i.e., adversarial training). In this paper we investigate the state-action value functions learned by state-of-the-art adversarially trained deep neural policies and by vanilla trained deep neural policies. We perform several experiments with OpenAI Baselines and show that the state-action value functions learned by vanilla trained deep neural policies provide more accurate estimates for non-optimal actions than those learned by state-of-the-art adversarially trained deep neural policies. We believe our study lays out intriguing properties of adversarial training and could be a critical step towards obtaining robust and reliable policies.
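As an illustration of the kind of comparison described above, the following is a minimal sketch (not the paper's code) of how one might probe the accuracy of a learned state-action value estimate Q(s, a) for a non-optimal action: force the non-greedy action once at the start state, follow the greedy policy afterwards, and compare the observed discounted return against the network's estimate. The `q_network` (a PyTorch module mapping observations to per-action Q-values), the Gymnasium-style `env` interface, and the discount factor are all assumptions made for the sketch; running the probe on a vanilla trained network and on an adversarially trained network and comparing the resulting errors mirrors, in spirit, the comparison reported in the paper.

```python
# Minimal sketch, assuming a Gymnasium-style environment and a PyTorch Q-network.
# All names here (q_network, env) are placeholders, not the paper's code.
import numpy as np
import torch


def non_optimal_action_value_error(q_network, env, gamma=0.99, episodes=20, max_steps=1000):
    """Average |Q(s0, a) - Monte Carlo return| when a non-greedy action a is forced at s0."""
    rng = np.random.default_rng(0)
    errors = []
    for _ in range(episodes):
        obs, _ = env.reset()
        with torch.no_grad():
            q_values = q_network(torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0)).squeeze(0)
        greedy = int(q_values.argmax())
        # Pick one of the non-optimal actions at the start state and record its estimate.
        non_optimal = [a for a in range(q_values.shape[0]) if a != greedy]
        action = int(rng.choice(non_optimal))
        estimate = float(q_values[action])
        # Roll out: take the probed non-optimal action first, then follow the greedy policy.
        ret, discount = 0.0, 1.0
        for _ in range(max_steps):
            obs, reward, terminated, truncated, _ = env.step(action)
            ret += discount * float(reward)
            discount *= gamma
            if terminated or truncated:
                break
            with torch.no_grad():
                q_next = q_network(torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0)).squeeze(0)
            action = int(q_next.argmax())
        errors.append(abs(estimate - ret))
    return float(np.mean(errors))
```

A lower average error under this probe would indicate that the network's value estimates for non-optimal actions track the returns actually achievable from them; the abstract's claim is that vanilla trained policies fare better on this kind of measure than adversarially trained ones.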

Related Material


@InProceedings{Korkmaz_2021_CVPR,
    author    = {Korkmaz, Ezgi},
    title     = {Inaccuracy of State-Action Value Function for Non-Optimal Actions in Adversarially Trained Deep Neural Policies},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {2323-2327}
}