An agent-based learning towards decentralized and coordinated traffic signal control

Abstract

Adaptive traffic signal control is a promising technique for alleviating traffic congestion. Reinforcement Learning (RL) has the potential to tackle the optimal traffic control problem for a single agent; however, the ultimate goal is integrated traffic control across multiple intersections, which can be achieved efficiently using decentralized controllers. Multi-Agent Reinforcement Learning (MARL) extends RL techniques to multiple decentralized agents operating in a non-stationary environment. Most studies in the field of traffic signal control assume a stationary environment, an approach whose shortcomings are highlighted in this paper. A Q-Learning-based acyclic signal control system that uses a variable phasing sequence is developed. To investigate the appropriate state model for different traffic conditions, three models were developed, each with a different state representation. The models were tested on a typical multiphase intersection with the objective of minimizing vehicle delay and were compared against the pre-timed control strategy as a benchmark. The Q-Learning control system consistently outperformed the widely used Webster pre-timed optimized signal control strategy under various traffic conditions.
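
For context, the sketch below illustrates the general tabular Q-Learning loop underlying this kind of acyclic controller: an agent observes discretized queue lengths, directly selects the next green phase (a variable phasing sequence rather than a fixed cycle), and updates its Q-table from a delay-based reward. This is a minimal sketch of the general technique, not the authors' implementation; the state encoding, the queue-based reward proxy, the toy simulator, and all names are hypothetical assumptions.

import random
from collections import defaultdict

# Illustrative tabular Q-Learning agent for acyclic signal control.
# The agent picks the next green phase directly (variable phasing
# sequence) instead of cycling through phases in a fixed order.
# State encoding, reward, and the toy environment are assumptions
# for demonstration only.

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration rate
PHASES = [0, 1, 2, 3]  # e.g. four phases of a multiphase intersection

Q = defaultdict(float)  # Q[(state, action)] -> estimated return

def choose_phase(state):
    """Epsilon-greedy selection of the next green phase."""
    if random.random() < EPSILON:
        return random.choice(PHASES)
    return max(PHASES, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-Learning update."""
    best_next = max(Q[(next_state, a)] for a in PHASES)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Toy stand-in for a traffic simulator (hypothetical dynamics).
def observe(queues):
    """State: queue length per approach, discretized into coarse bins."""
    return tuple(min(q // 5, 3) for q in queues)

def step(queues, phase):
    """Serve the chosen phase while other queues grow; the reward is
    the negative total queue, a crude proxy for vehicle delay."""
    queues = [max(0, q - 8) if p == phase else q + random.randint(0, 3)
              for p, q in enumerate(queues)]
    return queues, -sum(queues)

queues = [random.randint(0, 20) for _ in PHASES]
state = observe(queues)
for _ in range(10_000):
    action = choose_phase(state)
    queues, reward = step(queues, action)
    next_state = observe(queues)
    update(state, action, reward, next_state)
    state = next_state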

Authors

El-Tantawy S; Abdulhai B

Pagination

pp. 665-670

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

September 1, 2010

DOI

10.1109/itsc.2010.5625066

Name of conference

13th International IEEE Conference on Intelligent Transportation Systems

Conference proceedings

13th International IEEE Conference on Intelligent Transportation Systems (ITSC)

ISSN

2153-0009