TY - GEN
T1 - On Performance of VNF Load Prediction Models in Service Function Chaining
AU - Cho, Yunyoung
AU - Jang, Seokwon
AU - Pack, Sangheon
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/10/21
Y1 - 2020/10/21
N2 - One of the fundamental challenges in network function virtualization (NFV) is how to autonomously and dynamically allocate resources to virtualized network functions (VNFs) whose resource requirements frequently change. To address this issue, several machine learning (ML)-based studies have been proposed that predict the future loads of VNFs in service function chaining (SFC)-enabled networks. In this paper, we compare two prediction models: 1) a basic long short-term memory (LSTM) model that uses the historical resource utilization of a VNF for prediction and 2) a context and aspect embedded attentive target-dependent LSTM (CAT-LSTM) model that leverages the historical utilization of both a VNF and its neighboring VNFs. Simulation results demonstrate that the performance of the basic LSTM is not affected by the number of service chains (SCs) allocated to a VNF. Meanwhile, the performance of CAT-LSTM degrades noticeably when multiple SCs are allocated to VNFs, with its prediction loss increasing from 4% to 6%.
AB - One of the fundamental challenges in network function virtualization (NFV) is how to autonomously and dynamically allocate resources to virtualized network functions (VNFs) whose resource requirements frequently change. To address this issue, several machine learning (ML)-based studies have been proposed that predict the future loads of VNFs in service function chaining (SFC)-enabled networks. In this paper, we compare two prediction models: 1) a basic long short-term memory (LSTM) model that uses the historical resource utilization of a VNF for prediction and 2) a context and aspect embedded attentive target-dependent LSTM (CAT-LSTM) model that leverages the historical utilization of both a VNF and its neighboring VNFs. Simulation results demonstrate that the performance of the basic LSTM is not affected by the number of service chains (SCs) allocated to a VNF. Meanwhile, the performance of CAT-LSTM degrades noticeably when multiple SCs are allocated to VNFs, with its prediction loss increasing from 4% to 6%.
KW - load prediction
KW - machine learning
KW - service function chaining
KW - virtualized network function
UR - http://www.scopus.com/inward/record.url?scp=85098993760&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85098993760&partnerID=8YFLogxK
U2 - 10.1109/ICTC49870.2020.9289275
DO - 10.1109/ICTC49870.2020.9289275
M3 - Conference contribution
AN - SCOPUS:85098993760
T3 - International Conference on ICT Convergence
SP - 344
EP - 346
BT - ICTC 2020 - 11th International Conference on ICT Convergence
PB - IEEE Computer Society
T2 - 11th International Conference on Information and Communication Technology Convergence, ICTC 2020
Y2 - 21 October 2020 through 23 October 2020
ER -