Abstract
Distributed serverless edge clouds and split computing are promising technologies for reducing the inference latency of large-scale deep neural networks (DNNs). In this paper, we propose a dynamic split computing framework (DSCF) for distributed serverless edge clouds. In DSCF, the edge cloud orchestrator dynamically determines 1) the splitting point and 2) the warm-status maintenance of container instances (i.e., whether or not to keep each container instance warm). To make optimal decisions, we formulate a constrained Markov decision process (CMDP) problem that minimizes the inference latency while keeping the average resource consumption of the distributed edge clouds below a given level. The optimal stochastic policy is obtained by converting the CMDP model into an equivalent linear programming (LP) model. The evaluation results demonstrate that DSCF achieves less than half the inference latency of the local computing scheme while keeping the resource consumption of the distributed edge clouds sufficiently low.
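The CMDP-to-LP conversion mentioned in the abstract is, in the standard occupancy-measure formulation, a linear program over state-action frequencies. The sketch below illustrates that idea on a toy problem with randomly generated dynamics; all sizes, cost and resource values, and the budget `R_max` are hypothetical placeholders rather than values from the paper, and `scipy.optimize.linprog` merely stands in for whatever solver the authors used. The states and actions only loosely mimic the paper's splitting-point and warm-status decisions.

```python
import numpy as np
from scipy.optimize import linprog

# Toy sizes (hypothetical, not from the paper): S states capturing e.g.
# network/load conditions, A actions encoding (splitting point, warm/cold).
S, A = 4, 3
rng = np.random.default_rng(0)

# P[s, a, s']   : transition probabilities
# latency[s, a] : per-step inference-latency cost
# resource[s, a]: per-step edge-cloud resource consumption
# R_max         : average-resource budget (the CMDP constraint level)
P = rng.dirichlet(np.ones(S), size=(S, A))        # shape (S, A, S)
latency = rng.uniform(1.0, 5.0, size=(S, A))
resource = rng.uniform(0.0, 1.0, size=(S, A))
R_max = 0.5

n = S * A                   # one occupancy variable rho(s, a) per pair
c = latency.reshape(n)      # objective: minimize long-run average latency

# Equality constraints: flow balance for each state s', plus normalization.
A_eq = np.zeros((S + 1, n))
b_eq = np.zeros(S + 1)
for sp in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] = (1.0 if s == sp else 0.0) - P[s, a, sp]
A_eq[S, :] = 1.0            # sum of occupancies equals 1
b_eq[S] = 1.0

# Inequality constraint: average resource consumption <= R_max.
A_ub = resource.reshape(1, n)
b_ub = np.array([R_max])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n, method="highs")
rho = res.x.reshape(S, A)

# Optimal stochastic policy: pi(a | s) = rho(s, a) / sum_a rho(s, a).
pi = rho / rho.sum(axis=1, keepdims=True)
print("average latency:", res.fun)
print("policy:\n", np.round(pi, 3))
```

Because the occupancy measure is linear in both the objective and the constraint, the constrained problem admits an exact LP solution, and the optimal policy recovered from it is in general randomized, which matches the stochastic policy described in the abstract.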
| Original language | English |
|---|---|
| Pages (from-to) | 1 |
| Number of pages | 1 |
| Journal | IEEE Internet of Things Journal |
| DOIs | |
| Publication status | Accepted/In press - 2023 |
Bibliographical note
Publisher Copyright: IEEE
Keywords
- Artificial neural networks
- Cloud computing
- Computational modeling
- Containers
- Mobile handsets
- Predictive models
- Split computing
- Tail
- distributed serverless edge cloud
- joint optimization
- warm start
ASJC Scopus subject areas
- Signal Processing
- Information Systems
- Hardware and Architecture
- Computer Science Applications
- Computer Networks and Communications