Conference

Bilateral Deep Reinforcement Learning Approach for Better-than-human Car-following

Abstract

Car-following based on Reinforcement Learning (RL) has received attention in recent years, with the goal of learning and matching human-level performance from human car-following data. However, most existing RL methods model car-following as a unilateral problem, sensing only the leading vehicle ahead. For better car-following performance, we propose two extensions: (1) we optimize car-following for maximum efficiency, safety, and comfort using Deep Reinforcement Learning (DRL), and (2) inspired by the Bilateral Control Model (BCM), we integrate bilateral information from the vehicles in front of and behind the subject vehicle into both the state and the reward function. Furthermore, we use a decentralized multi-agent RL framework to generate the corresponding control action for each agent. Our simulation results in both closed-loop and perturbation tests demonstrate that the learned policy outperforms the human driving policy in terms of (a) inter-vehicle headway, (b) average speed, (c) jerk, (d) Time to Collision (TTC), and (e) string stability.
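The paper's exact state and reward formulations are not reproduced on this page. As a rough illustration of the bilateral idea only, a BCM-inspired observation includes the vehicles both ahead of and behind the ego vehicle, and the reward trades off efficiency, comfort, and safety. The sketch below is not the authors' implementation; all variable names, weights, and thresholds are assumptions for illustration:

```python
import numpy as np

def bilateral_state(v_ego, v_front, v_rear, gap_front, gap_rear):
    """Illustrative bilateral observation (BCM-inspired): the ego speed
    plus relative speeds and gaps to BOTH the leader and the follower,
    rather than the leader alone as in unilateral car-following."""
    return np.array([
        v_ego,
        v_front - v_ego,   # relative speed to leading vehicle
        v_rear - v_ego,    # relative speed to following vehicle
        gap_front,         # headway to leading vehicle
        gap_rear,          # headway to following vehicle
    ])

def reward(v_ego, v_desired, jerk, ttc,
           w_eff=1.0, w_comfort=0.1, w_safety=1.0, ttc_min=4.0):
    """Illustrative reward combining efficiency (speed tracking),
    comfort (jerk penalty), and safety (low-TTC penalty).
    All weights and the TTC threshold are hypothetical."""
    r_eff = -abs(v_ego - v_desired) / v_desired   # track desired speed
    r_comfort = -w_comfort * jerk ** 2            # penalize harsh jerk
    r_safety = -w_safety if ttc < ttc_min else 0.0  # penalize unsafe TTC
    return w_eff * r_eff + r_comfort + r_safety
```

In a decentralized multi-agent setup, each vehicle would compute such an observation locally and receive its own reward, with a shared policy mapping the bilateral state to an acceleration command.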

Authors

Shi T; Ai Y; ElSamadisy O; Abdulhai B

Volume

00

Pagination

pp. 3986-3992

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

October 12, 2022

DOI

10.1109/itsc55140.2022.9922023

Name of conference

2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)