Evaluating Reinforcement Learning State Representations for Adaptive Traffic Signal Control

abstract

  • Reinforcement learning has shown potential for developing effective adaptive traffic signal controllers that reduce traffic congestion and improve mobility. Despite many successful research studies, few of these ideas have been implemented in practice, and uncertainty remains about the data and sensor requirements for actualizing reinforcement learning traffic signal control. We seek to understand the data requirements of, and the performance differences among, different state representations for reinforcement learning traffic signal control. We model three state representations, from low to high resolution, and compare their performance in simulation using the asynchronous advantage actor-critic and distributional Q-learning algorithms with neural network function approximation. Results show that low-resolution state representations (e.g., occupancy and average speed) perform almost identically to high-resolution state representations (e.g., individual vehicle position and speed) when using fully connected neural networks, but deep neural networks with a high-resolution state representation achieve the best performance. These results indicate that reinforcement learning traffic signal controllers can be implemented in practice with a variety of sensors (e.g., loop detectors, cameras, radar).
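
  • The article does not include code; as a rough illustration of the comparison described above, the sketch below contrasts a low-resolution, detector-style state (per-lane occupancy and average speed) with a high-resolution, per-vehicle state (discretized position and speed). All constants, lane geometry, and function names here are illustrative assumptions, not the paper's actual encoding.

```python
import numpy as np

# Hypothetical sketch of the two state resolutions compared in the paper;
# the speed limit, detection-zone length, and cell count are assumed values.
MAX_SPEED = 15.0   # assumed speed limit (m/s), used to normalize speeds
LANE_LEN = 100.0   # assumed detection-zone length (m) per approach lane
N_CELLS = 20       # assumed discretization of the high-resolution grid

def low_res_state(lanes):
    """Aggregate, loop-detector-style features: occupancy and mean speed per lane.

    `lanes` is a list of lanes, each a list of (position_m, speed_mps) tuples
    for the vehicles currently inside that lane's detection zone.
    """
    state = []
    for vehicles in lanes:
        occupancy = len(vehicles) / (LANE_LEN / 7.5)  # assumed ~7.5 m per vehicle slot
        mean_speed = (np.mean([v for _, v in vehicles]) / MAX_SPEED
                      if vehicles else 0.0)
        state.extend([min(occupancy, 1.0), mean_speed])
    return np.asarray(state, dtype=np.float32)

def high_res_state(lanes):
    """Per-vehicle grid: a presence channel and a speed channel per lane,
    discretized into N_CELLS cells, as a camera/radar-style representation."""
    grid = np.zeros((len(lanes), 2, N_CELLS), dtype=np.float32)
    for i, vehicles in enumerate(lanes):
        for pos, speed in vehicles:
            cell = min(int(pos / LANE_LEN * N_CELLS), N_CELLS - 1)
            grid[i, 0, cell] = 1.0                 # vehicle presence
            grid[i, 1, cell] = speed / MAX_SPEED   # normalized speed
    return grid

# Example: two approach lanes with a few vehicles each.
lanes = [[(12.0, 8.5), (31.5, 0.0)], [(5.0, 14.0)]]
print(low_res_state(lanes))         # shape (4,): occupancy + mean speed per lane
print(high_res_state(lanes).shape)  # (2, 2, 20): lanes x channels x cells
```

  • The low-resolution vector corresponds to what aggregate detectors can report, while the high-resolution grid assumes per-vehicle tracking from cameras or radar, mirroring the sensor trade-off the abstract describes.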

publication date

  • June 11, 2019