
Online Policy Learning for Opportunistic Mobile Computation Offloading

Abstract

This work considers opportunistic mobile computation offloading between a requestor and a helper. The requestor device may offload some of its computation-intensive tasks to the helper device; the availability of the helper, however, is random. The objective of this work is to find the optimal offloading decisions for the requestor that minimize its energy consumption, subject to a mean delay constraint on the tasks. The problem is formulated as a constrained Markov decision process that accounts for random task arrivals, the availability of the helper, and time-varying channel conditions. The optimal offline solution is first obtained through linear programming. An online algorithm is then designed to learn the optimal offloading policy by introducing post-decision states into the problem. Simulation results demonstrate that the proposed online algorithm achieves close-to-optimal performance with much lower complexity.
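The post-decision-state idea mentioned in the abstract can be illustrated on a toy offloading queue. The sketch below is an illustrative assumption, not the paper's model: a single task queue, a helper that is free with some probability each slot, a fixed Lagrange multiplier pricing the delay constraint, and made-up energy costs. The post-decision state (the queue after the serving decision, before the random arrival) lets the learner act greedily without knowing the arrival or helper statistics.

```python
import random

random.seed(0)

# Toy parameters (illustrative assumptions, not values from the paper)
N = 10           # queue capacity
P_ARRIVE = 0.5   # task arrival probability per slot
P_HELPER = 0.6   # helper availability probability per slot
E_LOCAL = 1.0    # energy to process one task locally
E_OFFLOAD = 0.3  # energy to offload one task to the helper
LAM = 0.2        # Lagrange multiplier pricing the mean-delay constraint
GAMMA = 0.95     # discount factor

def step_cost(q, action):
    """Immediate cost: energy of the chosen action plus a delay penalty."""
    energy = {"idle": 0.0, "local": E_LOCAL, "offload": E_OFFLOAD}[action]
    return energy + LAM * q

def post_state(q, action):
    """Post-decision state: queue after serving, before the random arrival."""
    return q - 1 if action in ("local", "offload") and q > 0 else q

def greedy_action(q, helper_free, V):
    """Pick the action minimizing immediate cost + post-decision value.
    No transition probabilities are needed: the randomness (arrival,
    helper availability) is deferred past the post-decision state."""
    actions = ["idle", "local"] + (["offload"] if helper_free else [])
    return min(actions, key=lambda a: step_cost(q, a) + GAMMA * V[post_state(q, a)])

def learn(iters=20000, alpha=0.05):
    """Stochastic approximation of the post-decision-state value function."""
    V = [0.0] * (N + 1)
    q = 0
    for _ in range(iters):
        helper_free = random.random() < P_HELPER
        a = greedy_action(q, helper_free, V)
        s_pd = post_state(q, a)
        # The arrival randomness realizes only after the decision
        q_next = min(N, s_pd + (1 if random.random() < P_ARRIVE else 0))
        helper_next = random.random() < P_HELPER
        a_next = greedy_action(q_next, helper_next, V)
        target = step_cost(q_next, a_next) + GAMMA * V[post_state(q_next, a_next)]
        V[s_pd] += alpha * (target - V[s_pd])
        q = q_next
    return V
```

Because local processing and offloading lead to the same post-decision state here, the learned greedy policy always prefers the cheaper offload action whenever the helper is free, which is the qualitative behavior the abstract's formulation targets.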

Authors

Mu S; Zhong Z; Zhao D

Volume

00

Pagination

pp. 1-6

Publisher

Institute of Electrical and Electronics Engineers (IEEE)

Publication Date

December 11, 2020

DOI

10.1109/globecom42002.2020.9322467

Name of conference

GLOBECOM 2020 - 2020 IEEE Global Communications Conference