Journal article

Deep Reinforcement Learning Freeway Controller Chooses Ramp Metering Over Variable Speed Limits

Abstract

The benefits of controlling a freeway bottleneck using reinforcement learning (RL)-based ramp metering (RM) and/or variable speed limit (VSL) controllers are well established. However, when both RM and VSL are used to control the freeway, it is not clear how each method benefits the traffic stream relative to the other. We argue that, depending on traffic conditions, it may be better to use one and not both, or, more importantly, to dynamically switch between the two. Moreover, a learning agent can automate the switch when warranted. In this paper, we offer an intensive analysis and performance evaluation of RL-based as well as regulator-based RM and VSL controllers, applied to both an Aimsun-simulated hypothetical freeway network from the literature and a real-world freeway on-ramp extracted from the Queen Elizabeth Way (QEW) in Ontario, Canada, under different levels of demand. The findings indicate that RM is more effective and beneficial than VSL in heavily congested scenarios, whereas VSL can be beneficial in moderately and lightly congested scenarios. We also show that RL has the advantage of automatically prioritizing one control method over the other depending on traffic conditions. We demonstrate that, in heavy congestion scenarios, the RL control agent that manages both RM and VSL clearly chooses RM over VSL.
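The abstract describes a single RL agent that manages both RM and VSL and can prioritize one over the other as conditions change. As a purely illustrative sketch, not the authors' implementation (the paper evaluates deep RL controllers in Aimsun), the snippet below shows one way a joint RM/VSL action space with a tabular Q-learning update could be set up; the metering rates, speed-limit values, state encoding, and function names are all hypothetical assumptions.

```python
import random

# Hypothetical discretized joint action space: each action pairs a ramp-metering
# rate (veh/h) with a variable speed limit (km/h). A None entry leaves that
# control inactive, so the agent can pick RM only, VSL only, both, or neither.
RM_RATES = [None, 400, 800, 1200]    # assumed metering rates (veh/h)
VSL_VALUES = [None, 60, 80, 100]     # assumed speed limits (km/h)
ACTIONS = [(rm, vsl) for rm in RM_RATES for vsl in VSL_VALUES]

def choose_action(q_values, state, epsilon=0.1):
    """Epsilon-greedy selection over the joint RM/VSL action space."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    row = q_values.setdefault(state, [0.0] * len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: row[a])

def q_update(q_values, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step; reward could be, e.g., negative total delay."""
    row = q_values.setdefault(state, [0.0] * len(ACTIONS))
    next_row = q_values.setdefault(next_state, [0.0] * len(ACTIONS))
    row[action] += alpha * (reward + gamma * max(next_row) - row[action])
```

Because each action dimension includes an inactive option, a greedy policy can effectively "choose RM over VSL" under heavy congestion by converging to actions whose VSL component remains inactive, which is the switching behavior the abstract attributes to the learning agent.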

Authors

ElSamadisy O; Smirnov I; Wang X; Abdulhai B

Journal

Transportation Research Record: Journal of the Transportation Research Board, Vol. 2679, No. 8, pp. 194–213

Publisher

SAGE Publications

Publication Date

January 1, 2025

DOI

10.1177/03611981251333340

ISSN

0361-1981
