Abstract
This article studies a reinforcement learning (RL) approach to beam tracking in millimeter-wave massive multiple-input multiple-output (MIMO) systems. Exhaustive beam sweeping in traditional beam training is intractable due to its prohibitive search overhead. To address this issue, the problem can be formulated as a partially observable Markov decision process (POMDP), in which decisions are made from partial beam sweeps. However, the POMDP cannot be straightforwardly handled by existing RL approaches, which are intended for fully observable environments. In this article, we propose a deep recurrent Q-network (DRQN) method that provides an efficient beam decision policy using only partial observations. Numerical results validate the superiority of the proposed method over conventional schemes.
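To illustrate the idea, the sketch below shows a minimal DRQN-style agent in PyTorch: an LSTM summarizes a history of partial beam-sweep observations (a stand-in for the POMDP belief), and a linear head outputs Q-values over candidate beam indices. This is not the authors' implementation; all dimensions, layer sizes, and the observation format are illustrative assumptions.

```python
# Minimal DRQN sketch (illustrative, not the paper's implementation):
# an LSTM-based Q-network mapping partial beam-sweep observations to
# Q-values over candidate beams. Dimensions below are assumptions.
import torch
import torch.nn as nn

class DRQN(nn.Module):
    def __init__(self, obs_dim: int, num_beams: int, hidden_dim: int = 64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden_dim)                   # embed one partial observation
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)   # recurrent state tracks history
        self.q_head = nn.Linear(hidden_dim, num_beams)                  # Q-value per candidate beam

    def forward(self, obs_seq, hidden=None):
        # obs_seq: (batch, time, obs_dim) sequence of partial beam measurements
        x = torch.relu(self.encoder(obs_seq))
        x, hidden = self.lstm(x, hidden)
        return self.q_head(x), hidden  # (batch, time, num_beams), recurrent state

# Greedy beam selection from a short observation history
# (training would use epsilon-greedy exploration and a replay of sequences).
net = DRQN(obs_dim=8, num_beams=32)
obs_history = torch.randn(1, 5, 8)           # 5 time steps of partial sweep results
q_values, _ = net(obs_history)
beam_index = q_values[:, -1].argmax(dim=-1)  # act on the latest Q-estimate
```

The recurrent state is what lets the agent act under partial observability: instead of requiring a full sweep at each step, it accumulates evidence from the few beams it has measured so far.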
| Original language | English |
|---|---|
| Pages (from-to) | 13429-13434 |
| Number of pages | 6 |
| Journal | IEEE Transactions on Vehicular Technology |
| Volume | 71 |
| Issue number | 12 |
| DOIs | |
| Publication status | Published - 2022 Dec 1 |
Bibliographical note
Publisher Copyright: © 1967-2012 IEEE.
Keywords
- Beam tracking
- deep reinforcement learning
- millimeter-wave communication
ASJC Scopus subject areas
- Aerospace Engineering
- Electrical and Electronic Engineering
- Computer Networks and Communications
- Automotive Engineering