Abstract
Split computing has emerged as a promising approach to alleviate the resource constraints of IoT devices by offloading computation to edge servers. However, conventional split computing schemes fail to effectively support multi-task learning (MTL) models, which feature a shared backbone and multiple task-specific branches. These structural characteristics, combined with the variability of network conditions, call for a more flexible and adaptive offloading strategy. In this paper, we propose a deep reinforcement learning (DRL)-based dynamic split computing framework (D2SCF) tailored to MTL models. In D2SCF, the MTL model is first split into a head model comprising the shared layers and several tail models corresponding to the task-specific layers (i.e., the individual output branches). The IoT device then makes split computing decisions for the head and tail models. To minimize the average task completion time across multiple tasks and the energy consumption of the IoT device, we formulate a Markov decision process problem. The formulated problem is solved using a model-free DRL algorithm (i.e., the deep deterministic policy gradient). Evaluation results demonstrate that D2SCF reduces the average task completion time by more than 50% compared to conventional split computing schemes, while maintaining low energy consumption on the IoT device. Moreover, it consistently outperforms baseline methods across heterogeneous network settings, confirming its robustness in dynamic environments.
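As a concrete illustration of the head/tail partition described in the abstract, the sketch below splits a toy multi-task network with a shared backbone into a device-side head and a server-side tail plus task branches. This is a minimal sketch, assuming a PyTorch implementation; the architecture, layer sizes, and names (`MTLModel`, `split_at`, the split index `k`) are illustrative placeholders, not taken from the paper.

```python
# Hypothetical sketch of the head/tail split; not the paper's implementation.
import torch
import torch.nn as nn

class MTLModel(nn.Module):
    def __init__(self, num_tasks: int = 2):
        super().__init__()
        # Shared backbone: candidate split points lie between these layers.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One task-specific branch ("tail" output head) per task.
        self.branches = nn.ModuleList(
            nn.Linear(32, 10) for _ in range(num_tasks)
        )

    def forward(self, x):
        z = self.backbone(x)
        return [branch(z) for branch in self.branches]

def split_at(model: MTLModel, k: int):
    """Split the shared backbone after layer k: the head runs on the IoT
    device; the remaining shared layers and all task branches can be
    offloaded to the edge server."""
    layers = list(model.backbone.children())
    head = nn.Sequential(*layers[:k])           # on-device part
    tail_backbone = nn.Sequential(*layers[k:])  # server-side shared part
    return head, tail_backbone, model.branches

model = MTLModel(num_tasks=3)
head, tail_backbone, branches = split_at(model, k=2)
x = torch.randn(1, 3, 32, 32)
feature = head(x)                   # computed on the IoT device
z = tail_backbone(feature)          # computed on the edge server
outputs = [b(z) for b in branches]  # one output per task
```

In the framework described above, the split index `k` (and the placement of each tail model) would be chosen dynamically by the DDPG agent based on the observed network and device state, rather than fixed as in this sketch.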
| Original language | English |
|---|---|
| Pages (from-to) | 103439-103450 |
| Number of pages | 12 |
| Journal | IEEE Access |
| Volume | 13 |
| DOIs | |
| Publication status | Published - 2025 |
Bibliographical note
Publisher Copyright: © 2013 IEEE.
Keywords
- Split computing
- deep deterministic policy gradient
- inference
- multi-task learning
- reinforcement learning
ASJC Scopus subject areas
- General Computer Science
- General Materials Science
- General Engineering