Efficient Mobile Computation Offloading with Hard Task Deadlines and Concurrent Local Execution

abstract

  • Mobile computation offloading (MCO) can alleviate the hardware limitations of mobile devices by migrating heavy computational tasks from mobile devices to more powerful cloud servers. This can lead to better performance and energy savings for the mobile devices. This thesis considers MCO over stochastic wireless channels when task completion times are subject to hard deadline constraints. Hard deadlines, however, are difficult to meet in conventional computation offloading because of the randomness introduced by the wireless channels. In the proposed offloading policies, concurrent local execution (CLE) is used to guarantee task execution time constraints. By sometimes allowing simultaneous local and remote execution, CLE ensures that job deadlines are always satisfied in the face of unexpected wireless channel conditions. The thesis introduces online optimal algorithms that reduce the overlap between remote and local execution so that energy wastage is minimized. Markov processes are used to model the communication channels.

MCO is addressed for three different job offloading schemes: continuous, multi-part, and preemptive. In continuous offloading, referred to as 1-Part offloading, the mobile device uploads the entire job in one piece, without interruption, once the scheduler decides to start. In multi-part computation offloading, referred to as K-Part offloading, the job is partitioned into a known number (K) of parts, and each part is uploaded separately; the upload initiation time of each part must be determined dynamically at runtime, and there may be waiting periods between consecutive upload parts. Preemptive offloading is a generalization of K-Part offloading in which the number of task upload parts is unknown; a decision to either continue offloading or to temporarily interrupt the offload is made at the start of each time slot. Compared to conventional contiguous computation offloading, the interrupted offloading mechanisms (i.e., K-Part and preemptive offloading) allow the system to adapt when channel conditions change and can therefore result in lower mobile device energy consumption. This energy reduction is obtained at the expense of higher computational complexity.

In this thesis, for each offloading scheme, an online computation offloading algorithm is introduced by constructing a time-dilated absorbing Markov chain (TDAMC) and applying dynamic programming (DP). These algorithms are shown to be energy-optimal while ensuring that the hard task deadline constraints are always satisfied, and their optimality is proved using Markov decision process stopping theory. Since the computational complexity of the proposed online algorithms, especially in the case of preemptive offloading, can be significant, three simpler and computationally efficient approximation methods are introduced: Markovian Compression (MC), Time Compression (TC), and Preemption Using Continuous Offloading (Preemption-CO). MC and TC reduce the state space of the offloading Markov process, either by using a novel notion of geometric similarity or by running an optimal online offloading algorithm at periodic time steps. In Preemption-CO, a task is offloaded preemptively, but the offloading decision at every time slot is based on non-preemptive calculations. These methods are used alone or in combination to construct practical offloading algorithms.

A variety of results are presented that show the tradeoffs between complexity and mobile energy-saving performance for the different algorithms.
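To make the flavor of these online decisions concrete, the following is a minimal sketch of the continuous (1-Part) case: a backward dynamic program over a two-state Markov channel that, in each slot, compares the expected energy of starting the uninterruptible upload now against waiting one more slot, while a local copy launched L slots before the deadline plays the role of CLE and guarantees the deadline. Everything in it (the two-state channel, the rate and energy numbers, the simplification that the job is finished once the upload completes, and the names upload_cost, V, and CLE_START) is an assumption made for illustration; it is not the thesis' TDAMC construction.

```python
"""Illustrative sketch only: energy-aware start-time selection for 1-Part
(continuous) offloading over a two-state Markov channel, with a concurrent
local execution (CLE) backup that guarantees the hard deadline.  All model
parameters and names below are assumptions made for this example."""
import functools

# --- assumed model parameters ----------------------------------------------
T       = 20                           # hard deadline, in time slots
B       = 6                            # upload size, in data units
L       = 5                            # slots needed to execute the job locally
RATE    = {"good": 2, "bad": 1}        # units uploaded per slot, by channel state
E_TX    = {"good": 1.0, "bad": 3.0}    # transmit energy per slot, by channel state
E_LOCAL = 2.0                          # local execution energy per slot
P = {                                  # Markov channel transition probabilities
    "good": {"good": 0.8, "bad": 0.2},
    "bad":  {"good": 0.4, "bad": 0.6},
}
CLE_START = T - L                      # local backup starts here, so the deadline always holds


@functools.lru_cache(maxsize=None)
def upload_cost(t, c, remaining):
    """Expected energy once the upload is in progress at slot t in channel state c:
    transmit energy every slot, plus wasted local (CLE) energy in every slot at or
    after CLE_START during which the remote result is still outstanding."""
    if remaining <= 0:
        return 0.0                     # remote result arrived: cancel the local copy
    if t >= T:
        return 0.0                     # deadline reached: the local copy has finished
    slot_cost = E_TX[c] + (E_LOCAL if t >= CLE_START else 0.0)
    left = remaining - RATE[c]
    return slot_cost + sum(P[c][n] * upload_cost(t + 1, n, left) for n in P[c])


@functools.lru_cache(maxsize=None)
def V(t, c):
    """Optimal-stopping value: minimum expected energy from slot t in channel
    state c, given that the upload has not started yet."""
    local_only = E_LOCAL * L           # never offload: run the job locally
    if t >= CLE_START:
        return local_only              # simplification: no upload starts after the backup does
    start_now = upload_cost(t, c, B)   # act now
    wait = sum(P[c][n] * V(t + 1, n) for n in P[c])  # postpone one slot
    # CLE always covers the deadline, so waiting can never violate it.
    return min(local_only, start_now, wait)


if __name__ == "__main__":
    for c in ("good", "bad"):
        print(f"initial channel {c}: minimum expected energy {V(0, c):.2f}")
```

The "act now versus wait one slot" comparison in V is the optimal-stopping structure the abstract refers to; in the K-Part and preemptive schemes the state would additionally have to track how much of the job has already been uploaded, which is one source of the additional computational complexity mentioned above.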

publication date

  • December 2020