Multi-task Learning with Future States for Vision-based Autonomous Driving

Inhan Kim, Hyemin Lee, Joonyeong Lee, Eunseop Lee, Daijin Kim; Proceedings of the Asian Conference on Computer Vision (ACCV), 2020

Abstract


Human drivers consider past and future driving environments to maintain stable control of a vehicle. To emulate this behavior, we propose a vision-based autonomous driving model, called the Future Actions and States Network (FASNet), which uses predicted future actions and generated future states in a multi-task learning manner. Future states are generated using an enhanced deep predictive-coding network and motion equations defined by the kinematic vehicle model. The final control values are determined by a weighted average of the predicted actions for a stable decision. With these methods, the proposed FASNet generalizes well to unseen environments. To validate FASNet, we conducted several experiments, including ablation studies, in realistic three-dimensional simulations. FASNet achieves a higher Success Rate (SR) than state-of-the-art models on the recent CARLA benchmarks under several conditions.
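The abstract mentions two concrete mechanisms: future states generated with motion equations from the kinematic vehicle model, and final controls obtained as a weighted average of predicted actions. A minimal sketch of both ideas is below, assuming the standard kinematic bicycle model with Euler integration; all function names, the wheelbase constant, and the weighting scheme are illustrative assumptions, not the paper's exact formulation.

```python
import math

def kinematic_bicycle_step(x, y, yaw, v, steer, accel,
                           wheelbase=2.7, dt=0.05):
    """One Euler step of the standard kinematic bicycle model.

    Illustrative only: the paper's exact motion equations and
    vehicle constants are not given in the abstract.
    """
    x += v * math.cos(yaw) * dt          # position update along heading
    y += v * math.sin(yaw) * dt
    yaw += (v / wheelbase) * math.tan(steer) * dt  # heading update
    v += accel * dt                       # speed update
    return x, y, yaw, v

def fuse_actions(actions, weights):
    """Weighted average of predicted actions (e.g. [steer, throttle])
    across future time steps, yielding a single stable control vector."""
    total = sum(weights)
    return [sum(w * a[i] for w, a in zip(weights, actions)) / total
            for i in range(len(actions[0]))]
```

For example, averaging two predicted action vectors with equal weights simply blends them component-wise, which damps abrupt single-step decisions.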

Related Material


[bibtex]
@InProceedings{Kim_2020_ACCV,
  author    = {Kim, Inhan and Lee, Hyemin and Lee, Joonyeong and Lee, Eunseop and Kim, Daijin},
  title     = {Multi-task Learning with Future States for Vision-based Autonomous Driving},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  month     = {November},
  year      = {2020}
}