Journal article

TacNet: A Tactic-Interactive Resource Allocation Method for Vehicular Networks

Abstract

To support safe driving and various on-board services, efficient resource allocation is crucial for the promising implementation of vehicle platooning in intelligent transportation systems (ITSs). The resource allocation of vehicle-to-everything (V2X) communications for vehicular platoons is studied in this article. First, a multiobjective function is formulated to jointly optimize sub-band and power allocation so as to satisfy Quality-of-Service (QoS) requirements in vehicular networks. With its advantage in handling complex decision-making problems in multiagent systems, distributed multiagent deep reinforcement learning (MADRL) stands out for resource allocation in vehicular networks. However, it faces the challenge of cooperation aging when each agent learns only from the information of other agents to form a cooperation model during training. Considering the random and dynamic combination of vehicles in vehicle platooning, a tactic-interactive MADRL method named TacNet is then proposed to improve the cooperation efficiency of multiple agents. In TacNet, the tactics of other agents are encoded and transmitted through interactive communications among the agents. In addition, with the development of vehicular edge computing (VEC), digital twin (DT) networks are constructed to assist in offloading computation-intensive resource allocation tasks from vehicles to the edge. The superiority of the proposed method is verified through extensive simulation results, in terms of convergence and of satisfying diversified QoS requirements, compared with state-of-the-art MADRL methods.
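The sketch below is a minimal illustration, in PyTorch, of the tactic-interaction idea described in the abstract: each agent encodes its local observation into a "tactic" vector, exchanges it with its peers, and selects a joint sub-band/power-level action conditioned on its own observation plus the received tactics. It is not the authors' implementation; all dimensions, network sizes, and names (TacticAgent, OBS_DIM, TACTIC_DIM) are illustrative assumptions.

```python
# Illustrative sketch (not the TacNet code): a tactic-interactive MADRL agent
# that shares a learned "tactic" vector with its peers before acting.
import torch
import torch.nn as nn

OBS_DIM, TACTIC_DIM = 16, 8         # assumed local-observation and tactic sizes
N_SUBBANDS, N_POWER_LEVELS = 4, 3   # assumed discrete sub-band / power action space

class TacticAgent(nn.Module):
    def __init__(self, n_agents: int):
        super().__init__()
        # Encoder that compresses the local observation into a tactic vector
        # to be broadcast to the other agents.
        self.tactic_encoder = nn.Sequential(
            nn.Linear(OBS_DIM, 32), nn.ReLU(), nn.Linear(32, TACTIC_DIM))
        # Q-network over joint (sub-band, power-level) actions, conditioned on
        # the local observation and the tactics received from the peers.
        in_dim = OBS_DIM + (n_agents - 1) * TACTIC_DIM
        self.q_net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, N_SUBBANDS * N_POWER_LEVELS))

    def encode_tactic(self, obs: torch.Tensor) -> torch.Tensor:
        return self.tactic_encoder(obs)

    def act(self, obs: torch.Tensor, peer_tactics: list) -> tuple:
        # Greedy action over the joint sub-band / power-level grid.
        with torch.no_grad():
            q = self.q_net(torch.cat([obs, *peer_tactics], dim=-1))
            a = int(q.argmax())
        return a // N_POWER_LEVELS, a % N_POWER_LEVELS  # (sub-band, power level)

# Toy interaction round for three platoon agents with random observations.
agents = [TacticAgent(n_agents=3) for _ in range(3)]
observations = [torch.randn(OBS_DIM) for _ in agents]
tactics = [ag.encode_tactic(o) for ag, o in zip(agents, observations)]
for i, (ag, o) in enumerate(zip(agents, observations)):
    peers = [t for j, t in enumerate(tactics) if j != i]
    print(f"agent {i} -> sub-band/power:", ag.act(o, peers))
```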

Authors

Fu X; Yuan Q; Zhuang Z; Li Y; Liao J; Zhao D

Journal

IEEE Internet of Things Journal, Vol. 11, No. 8, pp. 14370–14382

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

January 15, 2024

DOI

10.1109/jiot.2023.3345853

ISSN

2327-4662
