Traffic simulation has the potential to facilitate the development and testing of autonomous vehicles as a supplement to road testing. Since autonomous vehicles will coexist with human drivers in the transportation system for some time, traffic simulation needs intelligent driving agents that interact with them the way human drivers do. Learning directly from human driving behavior is an attractive solution. In this study, Adversarial Inverse Reinforcement Learning (AIRL) is applied to learn decision-making policies in complex, interactive traffic simulation environments with high traffic density. A Bird's Eye View (BEV) observation model is proposed for the driving agents, providing effective information for their decision-making. Results show that, compared with Behavioral Cloning (BC) and Proximal Policy Optimization (PPO), the driving agents generated by AIRL are safer and more robust, and they can imitate the car-following and lane-changing characteristics of the expert demonstrations. The results further confirm that different driving characteristics can be learned with the AIRL method.
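The AIRL method named in the abstract learns a reward through a binary discriminator of the form D(s, a) = exp(f(s, a)) / (exp(f(s, a)) + pi(a|s)), and the policy is then trained on the reward log D - log(1 - D), which algebraically reduces to f - log pi. A minimal sketch of that identity follows; the function names are illustrative and not taken from the paper:

```python
import numpy as np

def airl_discriminator(f_value: float, log_pi: float) -> float:
    """AIRL discriminator D = exp(f) / (exp(f) + pi(a|s)),
    where f is the learned reward/shaping term and log_pi is
    the policy's log-probability of the action."""
    return np.exp(f_value) / (np.exp(f_value) + np.exp(log_pi))

def airl_reward(f_value: float, log_pi: float) -> float:
    """Policy-training reward log D - log(1 - D).
    Algebraically this equals f - log pi."""
    d = airl_discriminator(f_value, log_pi)
    return float(np.log(d) - np.log(1.0 - d))
```

For example, with f = 1.0 and log pi = -0.5 the reward evaluates to f - log pi = 1.5, matching the closed-form simplification.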
Decision Making for Driving Agent in Traffic Simulation via Adversarial Inverse Reinforcement Learning
2023-09-24
1,200,838 bytes
Conference paper
Electronic resource
English
Europäisches Patentamt | 2023
Autonomous UAV Interception via Augmented Adversarial Inverse Reinforcement Learning
Springer Verlag | 2022