Abstract
This paper addresses the challenge of multiple users concurrently sharing a single channel in a wireless network, a problem typically managed by Carrier Sense Multiple Access (CSMA) protocols. Traditional CSMA methods, however, often lack robustness to environmental changes because they rely on static parameters. To overcome this limitation, we propose the Dynamic-Persistent Carrier Sense Multiple Access (DP-CSMA) method, a dynamic and flexible solution inspired by both non-persistent and p-persistent CSMA protocols. Our method incorporates a deep reinforcement learning (DRL) model that dynamically adjusts the waiting period and the decision-making process, based on the current state, when the channel is sensed idle. This strategy transcends the limitations of static hyperparameters, such as the probability factor in p-persistent CSMA or the contention window in CSMA/CA, which demand careful tuning relative to the number of users. The DRL model in our system captures the dynamic history of previous states and actions using a Long Short-Term Memory (LSTM) model. It efficiently compresses repetitively taken actions into a skill, thereby ensuring that a sufficient amount of information is encoded in the action history. Furthermore, our method generates skill-based policies that can induce variable waiting times for the agents, efficiently handling action sequences of varying lengths and optimizing channel access. We compare the performance of our method with conventional techniques in terms of throughput and evaluate how effectively each method utilizes the shared medium.
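To illustrate the static hyperparameter that the abstract contrasts against, below is a minimal sketch of the textbook p-persistent CSMA decision rule, where a fixed probability p governs transmission in each idle slot. The function name and interface are illustrative, not taken from the paper; DP-CSMA replaces this fixed-p rule with a learned, state-dependent policy.

```python
import random

def p_persistent_decide(p, channel_idle, rng=random.random):
    """One slot of textbook p-persistent CSMA.

    If the channel is busy, keep sensing ("wait"). If it is idle,
    transmit with fixed probability p, otherwise defer to the next
    slot. Note that p is static: it must be tuned to the number of
    contending users, which is the limitation DP-CSMA targets.
    """
    if not channel_idle:
        return "wait"
    return "transmit" if rng() < p else "defer"
```

For example, with `p = 0.5` and an idle channel, roughly half of the slots yield a transmission attempt; a collision occurs whenever two stations draw "transmit" in the same slot, which is why the optimal p shrinks as the user count grows.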
| Original language | English |
|---|---|
| Pages (from-to) | 178705-178716 |
| Number of pages | 12 |
| Journal | IEEE Access |
| Volume | 12 |
| DOIs | |
| Publication status | Published - 2024 |
Bibliographical note
Publisher Copyright: © 2013 IEEE.
Keywords
- Channel sharing
- dynamic channel access
- reinforcement learning
ASJC Scopus subject areas
- General Computer Science
- General Materials Science
- General Engineering