Car-following based on Reinforcement Learning (RL) has received attention in recent years, with the goal of learning from human car-following data and achieving performance levels comparable to humans. However, most existing RL methods model car-following as a unilateral problem, sensing only the vehicle ahead. To improve car-following performance, we propose two extensions: (1) we optimise car-following for maximum efficiency, safety and comfort using Deep Reinforcement Learning (DRL), and (2) inspired by the Bilateral Control Model (BCM), we integrate bilateral information from the vehicles both in front of and behind the subject vehicle into the state and the reward function. Furthermore, we use a decentralised multi-agent RL framework to generate the control action for each agent. Our simulation results in both closed-loop and perturbation tests demonstrate that the learned policy outperforms the human driving policy in terms of (a) inter-vehicle headways, (b) average speed, (c) jerk, (d) Time to Collision (TTC) and (e) string stability.
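The bilateral idea described in the abstract can be illustrated with a small sketch. The state and reward below are hypothetical and not taken from the paper: they assume a BCM-style balance term that penalises deviation from the midpoint between the front and rear gaps, plus illustrative speed-tracking and jerk (comfort) penalties with made-up weights.

```python
# Hypothetical sketch of a bilateral car-following state and reward,
# loosely following the Bilateral Control Model (BCM) idea of balancing
# the gaps to the vehicles ahead and behind. All weights are illustrative.

def bilateral_state(v_ego, v_front, v_back, gap_front, gap_back):
    """State observed by one agent: ego speed, relative speeds and gaps
    to both the leading and the following vehicle."""
    return (v_ego, v_front - v_ego, v_back - v_ego, gap_front, gap_back)

def bilateral_reward(state, accel, prev_accel, dt=0.1,
                     w_bal=1.0, w_speed=0.1, w_jerk=0.01, v_des=30.0):
    """Reward combining BCM-style gap balance, speed tracking and comfort."""
    v_ego, dv_front, dv_back, gap_front, gap_back = state
    balance = -w_bal * (gap_front - gap_back) ** 2        # sit midway (BCM)
    speed = -w_speed * (v_ego - v_des) ** 2               # track desired speed
    jerk = -w_jerk * ((accel - prev_accel) / dt) ** 2     # penalise jerk
    return balance + speed + jerk

# Example: ego slightly slower than the leader, closer to the follower.
s = bilateral_state(v_ego=29.0, v_front=30.0, v_back=28.0,
                    gap_front=25.0, gap_back=20.0)
r = bilateral_reward(s, accel=0.5, prev_accel=0.3)
```

In a decentralised multi-agent setup, each vehicle would evaluate such a state and reward locally; the unbalanced gaps (25 m vs 20 m) dominate the reward here, pushing the agent to restore symmetry between leader and follower.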





    Title: Bilateral Deep Reinforcement Learning Approach for Better-than-human Car-following

    Contributors:

    Publication date: 2022-10-08

    Size: 730249 bytes

    Type of media: Conference paper

    Type of material: Electronic Resource

    Language: English



    Proactive Car-Following Using Deep-Reinforcement Learning

    Yen, Yi-Tung / Chou, Jyun-Jhe / Shih, Chi-Sheng et al. | IEEE | 2020


    Modelling personalised car-following behaviour: a memory-based deep reinforcement learning approach

    Liao, Yaping / Yu, Guizhen / Chen, Peng et al. | Taylor & Francis Verlag | 2024


    Towards robust car-following based on deep reinforcement learning

    Hart, Fabian / Okhrin, Ostap / Treiber, Martin | Elsevier | 2024


    Deep Reinforcement Learning for Concentric Tube Robot Path Following

    Iyengar, Keshav / Spurgeon, Sarah / Stoyanov, Danail | BASE | 2023


    Dynamic Car-following Model Calibration with Deep Reinforcement Learning

    Naing, Htet / Cai, Wentong / Wu, Tiantian et al. | IEEE | 2022