Modelling and simulation of pedestrian crowds require agents to reach pre-determined goals and to avoid collisions with static obstacles and other moving pedestrians, while maintaining natural gait behaviour. We model pedestrians as autonomous, learning, and reactive agents employing Reinforcement Learning (RL). Typical RL-based agent simulations generalize poorly because they rely on handcrafted reward functions to ensure realistic behaviour. In this work, we model pedestrians in a modular framework that integrates navigation and collision avoidance as separate tasks. Each module has its own independent state space and reward function, but all modules share a common action space. Empirical results suggest that such modular learning models can achieve satisfactory performance without parameter tuning, and we compare them with state-of-the-art crowd simulation methods.
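The modular decomposition described above can be illustrated with a minimal sketch: each task module (navigation, collision avoidance) keeps its own state representation, reward, and value estimates, while a single action is chosen over the shared action space by combining the modules' preferences. The greatest-mass (summed Q-values) selection rule, the tabular Q-learning updates, and all names below are illustrative assumptions, not the paper's exact method.

```python
import random

# Shared discrete action space (assumed for illustration).
ACTIONS = ["forward", "left", "right", "stop"]

class Module:
    """One task module (e.g. navigation or collision avoidance) with its
    own state space, reward signal, and Q-table over the shared actions."""

    def __init__(self, alpha=0.1, gamma=0.9):
        self.q = {}          # (state, action) -> estimated value
        self.alpha = alpha   # learning rate
        self.gamma = gamma   # discount factor

    def value(self, state, action):
        return self.q.get((state, action), 0.0)

    def update(self, state, action, reward, next_state):
        # One-step Q-learning update using this module's own reward.
        best_next = max(self.value(next_state, a) for a in ACTIONS)
        td_error = reward + self.gamma * best_next - self.value(state, action)
        self.q[(state, action)] = self.value(state, action) + self.alpha * td_error

def select_action(modules, states, epsilon=0.1):
    """Epsilon-greedy greatest-mass selection: each module scores every
    shared action from its own state; the action with the highest summed
    score is executed by the agent."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS,
               key=lambda a: sum(m.value(s, a) for m, s in zip(modules, states)))
```

In this sketch the agent arbitrates between modules at every step: a navigation module may favour "forward" while the collision-avoidance module penalizes it near another pedestrian, so the summed score can shift the choice to a sidestep without any handcrafted blending of rewards.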
Pedestrian simulation as multi-objective reinforcement learning
2018-01-01
Conference paper
Electronic Resource
English