Journal article

Evaluating Reinforcement Learning State Representations for Adaptive Traffic Signal Control

Abstract

Reinforcement learning has shown potential for developing effective adaptive traffic signal controllers to reduce traffic congestion and improve mobility. Despite many successful research studies, few of these ideas have been implemented in practice, and it remains unclear what data and sensors are required to realize reinforcement learning traffic signal control. We seek to understand the data requirements and the performance differences among state representations for reinforcement learning traffic signal control. We model three state representations, from low to high resolution, and compare their performance in simulation using the asynchronous advantage actor-critic and distributional Q-learning algorithms with neural network function approximation. Results show that low-resolution state representations (e.g., occupancy and average speed) perform almost identically to high-resolution state representations (e.g., individual vehicle position and speed) using fully connected neural networks, but deep neural networks with a high-resolution state representation achieve the best performance. These results indicate that reinforcement learning traffic signal controllers can be implemented in practice with a variety of sensors (e.g., loop detectors, cameras, radar).
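To make the distinction concrete, the following is a minimal Python sketch of what the two kinds of state vectors described above might look like for a single approach lane. The lane length, cell size, and vehicle length are illustrative assumptions, not values from the paper, and the feature layouts are only one plausible encoding.

```python
LANE_LENGTH = 100.0  # metres per approach lane (assumed for illustration)
CELL_SIZE = 5.0      # metres per cell in the high-resolution grid (assumed)

def low_res_state(positions, speeds, veh_length=5.0):
    # Aggregate features a loop detector could supply:
    # lane occupancy fraction and average vehicle speed.
    occupancy = len(positions) * veh_length / LANE_LENGTH
    avg_speed = sum(speeds) / len(speeds) if speeds else 0.0
    return [occupancy, avg_speed]

def high_res_state(positions, speeds):
    # Per-vehicle features a camera or radar could supply:
    # a presence flag per position cell, plus the speed in each cell.
    n_cells = int(LANE_LENGTH / CELL_SIZE)
    presence = [0.0] * n_cells
    cell_speed = [0.0] * n_cells
    for pos, spd in zip(positions, speeds):
        cell = min(int(pos / CELL_SIZE), n_cells - 1)
        presence[cell] = 1.0
        cell_speed[cell] = spd
    return presence + cell_speed

# Three vehicles at 12 m, 40 m, and 87.5 m from the stop line:
positions = [12.0, 40.0, 87.5]
speeds = [8.0, 0.0, 13.9]  # m/s
low = low_res_state(positions, speeds)    # 2 aggregate features
high = high_res_state(positions, speeds)  # 40 per-cell features
```

The low-resolution vector stays the same size regardless of traffic, while the high-resolution vector grows with the spatial discretization, which is why the paper pairs it with larger (deep) networks.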

Authors

Genders W; Razavi S

Journal

International Journal of Traffic and Transportation Management, Vol. 1, No. 1

Publisher

International Association for Sharing Knowledge and Sustainability

Publication Date

June 11, 2019

DOI

10.5383/jttm.01.01.003

ISSN

2371-5782
