TY - JOUR
T1 - Energy Efficient AP Selection for Cell-Free Massive MIMO Systems
T2 - Deep Reinforcement Learning Approach
AU - Ghiasi, Niyousha
AU - Mashhadi, Shima
AU - Farahmand, Shahrokh
AU - Razavizadeh, S. Mohammad
AU - Lee, Inkyu
N1 - Funding Information:
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) under Grant 2022R1A5A1027646.
Publisher Copyright:
© 2023 IEEE.
PY - 2023/3/1
Y1 - 2023/3/1
N2 - The problem of access point (AP) to device association in a cell-free massive multiple-input multiple-output (MIMO) system is investigated. Utilizing energy efficiency (EE) as our main metric, we determine the optimal association parameters subject to minimum rate constraints for all devices. We incorporate all existing practical concerns in our formulation, including training errors, pilot contamination, and the central processing unit having access only to statistical channel state information (CSI). This EE maximization problem is highly non-convex and possibly NP-hard. We propose to solve this challenging problem by model-free deep reinforcement learning (DRL) methods. Due to the very large discrete action space of the posed optimization problem, existing DRL approaches cannot be directly applied. Thus, we approximate the large discrete action space with either a continuous set or a smaller discrete set, and modify existing DRL methods accordingly. Our novel approximations offer a framework with tolerable complexity and satisfactory performance that can be readily applied to other challenging optimization problems in wireless communication. Simulation results corroborate the superior performance of the modified DRL methods over conventional approaches.
AB - The problem of access point (AP) to device association in a cell-free massive multiple-input multiple-output (MIMO) system is investigated. Utilizing energy efficiency (EE) as our main metric, we determine the optimal association parameters subject to minimum rate constraints for all devices. We incorporate all existing practical concerns in our formulation, including training errors, pilot contamination, and the central processing unit having access only to statistical channel state information (CSI). This EE maximization problem is highly non-convex and possibly NP-hard. We propose to solve this challenging problem by model-free deep reinforcement learning (DRL) methods. Due to the very large discrete action space of the posed optimization problem, existing DRL approaches cannot be directly applied. Thus, we approximate the large discrete action space with either a continuous set or a smaller discrete set, and modify existing DRL methods accordingly. Our novel approximations offer a framework with tolerable complexity and satisfactory performance that can be readily applied to other challenging optimization problems in wireless communication. Simulation results corroborate the superior performance of the modified DRL methods over conventional approaches.
KW - Deep reinforcement learning
KW - cell-free massive MIMO
KW - energy efficiency
KW - imperfect CSI
KW - pilot contamination
UR - http://www.scopus.com/inward/record.url?scp=85135763031&partnerID=8YFLogxK
U2 - 10.1109/TGCN.2022.3196013
DO - 10.1109/TGCN.2022.3196013
M3 - Article
AN - SCOPUS:85135763031
SN - 2473-2400
VL - 7
SP - 29
EP - 41
JO - IEEE Transactions on Green Communications and Networking
JF - IEEE Transactions on Green Communications and Networking
IS - 1
ER -