Environment Agnostic Representation for Visual Reinforcement Learning

Hyesong Choi, Hunsang Lee, Seongwon Jeong, Dongbo Min; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 263-273

Abstract


The generalization capability of vision-based deep reinforcement learning (RL) is indispensable for coping with the dynamic environment changes that appear in visual observations. The high-dimensional space of the visual input, however, makes it challenging to adapt an agent to unseen environments. In this work, we propose Environment Agnostic Reinforcement learning (EAR), a compact framework for domain generalization in visual deep RL. Environment-agnostic features (EAFs) are extracted by leveraging three novel objectives based on feature factorization, reconstruction, and episode-aware state shifting, so that policy learning relies only on vital features. EAR is a simple single-stage method with low model complexity and fast inference, ensuring high reproducibility, while attaining state-of-the-art performance on the DeepMind Control Suite and DrawerWorld benchmarks.
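The abstract names three objectives (feature factorization, reconstruction, and episode-aware state shifting) that are combined to shape the learned features. The paper itself defines the exact losses; the following is only a hypothetical numpy sketch of how three such terms might be combined, with all function names, the feature-splitting scheme, and the specific loss forms assumed for illustration rather than taken from the paper.

```python
import numpy as np

def factorization_loss(f_env, f_agn):
    # Assumed form: penalize alignment between environment-specific and
    # environment-agnostic feature halves to encourage disentanglement.
    f_env = f_env / (np.linalg.norm(f_env) + 1e-8)
    f_agn = f_agn / (np.linalg.norm(f_agn) + 1e-8)
    return float(np.dot(f_env, f_agn) ** 2)

def reconstruction_loss(obs, recon):
    # Mean squared error between an observation and its reconstruction.
    return float(np.mean((obs - recon) ** 2))

def shift_consistency_loss(f_t, f_t1):
    # Assumed episode-aware term: consecutive states within an episode
    # should map to nearby environment-agnostic features.
    return float(np.mean((f_t - f_t1) ** 2))

def total_loss(obs, recon, feat, feat_next, w=(1.0, 1.0, 1.0)):
    # Illustrative only: split the feature vector into an agnostic half
    # and an environment-specific half, then sum the weighted objectives.
    half = feat.shape[0] // 2
    f_agn, f_env = feat[:half], feat[half:]
    f_agn_next = feat_next[:half]
    return (w[0] * factorization_loss(f_env, f_agn)
            + w[1] * reconstruction_loss(obs, recon)
            + w[2] * shift_consistency_loss(f_agn, f_agn_next))
```

In this toy setup, a perfect reconstruction with orthogonal feature halves and unchanged consecutive features yields zero loss; any deviation in one of the three terms increases it.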

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Choi_2023_ICCV,
    author    = {Choi, Hyesong and Lee, Hunsang and Jeong, Seongwon and Min, Dongbo},
    title     = {Environment Agnostic Representation for Visual Reinforcement Learning},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {263-273}
}