Journal article

A Reinforcement Learning Framework for Efficient Informative Sensing

Abstract

Large-scale spatial data can be collected using mobile robots equipped with sensing and navigation capabilities. Due to limited battery lifetime and the scarcity of charging stations, it is important to plan informative paths that maximize the utility of the collected data under a limited travel budget; this is known as the informative path planning (IPP) problem. IPP is NP-hard, and existing solutions suffer from high complexity or low optimality. In this paper, we present a novel IPP solution based on reinforcement learning (RL). The basic idea is to learn the structural characteristics of informative paths so that such paths can be predicted directly. As a result, when the budget changes, the problem does not need to be solved from scratch, and path planning efficiency improves dramatically. Across 20 path planning experiments in two areas, the proposed RL-based solution achieves the best path utility in 15, compared with state-of-the-art algorithms. More importantly, its inference complexity is linear in the budget (equivalently, the maximum number of steps in RL), which is lower than that of existing solutions. Despite the NP-hardness of IPP, path planning finishes within a few seconds in our experiments on two graphs of different sizes.
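
To make the problem setup concrete, the sketch below frames budget-constrained IPP as an RL episode on a toy graph: the state is the robot's current node, its remaining travel budget, and the set of visited nodes; the reward is the information gained at newly visited nodes; and a greedy rollout of the learned policy costs one decision per step, i.e., linear in the budget. The graph, per-node information values, and the tabular Q-learning agent are illustrative assumptions for exposition only, not the architecture used in the paper.

import random
from collections import defaultdict

# Toy sensing graph (assumed): node -> list of (neighbor, travel_cost)
GRAPH = {
    0: [(1, 1.0), (2, 1.5)],
    1: [(0, 1.0), (3, 1.0)],
    2: [(0, 1.5), (3, 1.0)],
    3: [(1, 1.0), (2, 1.0)],
}
# Assumed per-node information value, collected on first visit only
INFO = {0: 0.0, 1: 2.0, 2: 1.0, 3: 3.0}

def plan_path(start, budget, episodes=2000, alpha=0.1, gamma=0.95, eps=0.2):
    """Tabular Q-learning over states (node, remaining budget, visited set)."""
    Q = defaultdict(float)

    def actions(node, remaining):
        # Only moves whose travel cost fits in the remaining budget are feasible.
        return [(nbr, c) for nbr, c in GRAPH[node] if c <= remaining]

    for _ in range(episodes):
        node, remaining, visited = start, budget, frozenset([start])
        while True:
            acts = actions(node, remaining)
            if not acts:
                break
            state = (node, round(remaining, 1), visited)
            if random.random() < eps:
                nbr, cost = random.choice(acts)          # explore
            else:
                nbr, cost = max(acts, key=lambda a: Q[(state, a[0])])  # exploit
            reward = INFO[nbr] if nbr not in visited else 0.0
            nxt_visited = visited | {nbr}
            nxt_state = (nbr, round(remaining - cost, 1), nxt_visited)
            nxt_acts = actions(nbr, remaining - cost)
            best_next = max((Q[(nxt_state, a[0])] for a in nxt_acts), default=0.0)
            Q[(state, nbr)] += alpha * (reward + gamma * best_next - Q[(state, nbr)])
            node, remaining, visited = nbr, remaining - cost, nxt_visited

    # Greedy rollout: one lookup per step, so inference cost grows linearly
    # with the number of steps allowed by the budget.
    path, node, remaining, visited = [start], start, budget, {start}
    while True:
        acts = actions(node, remaining)
        if not acts:
            break
        state = (node, round(remaining, 1), frozenset(visited))
        nbr, cost = max(acts, key=lambda a: Q[(state, a[0])])
        path.append(nbr)
        visited.add(nbr)
        node, remaining = nbr, remaining - cost
    return path

if __name__ == "__main__":
    print(plan_path(start=0, budget=3.0))

Changing the budget argument replans with the already-learned value function instead of re-solving from scratch, which is the efficiency benefit the abstract highlights; the paper's actual method generalizes this idea with learned structural features rather than a per-state table.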

Authors

Wei Y; Zheng R

Journal

IEEE Transactions on Mobile Computing, Vol. 21, No. 7, pp. 2306–2317

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

July 1, 2022

DOI

10.1109/tmc.2020.3040945

ISSN

1536-1233