Sample-efficient multi-agent reinforcement learning with masked reconstruction

Jung In Kim, Young Jae Lee, Jongkook Heo, Jinhyeok Park, Jaehoon Kim, Sae Rin Lim, Jinyong Jeong, Seoung Bum Kim

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Deep reinforcement learning (DRL) is a powerful approach that combines reinforcement learning (RL) and deep learning to address complex decision-making problems in high-dimensional environments. Although DRL has been remarkably successful, its low sample efficiency necessitates long training times and large amounts of data to learn optimal policies. These limitations are even more pronounced in multi-agent reinforcement learning (MARL). To address them, various studies have sought to improve DRL. In this study, we propose an approach that combines a masked reconstruction task with QMIX (M-QMIX). By introducing masked reconstruction as an auxiliary task, we aim to improve sample efficiency, a fundamental limitation of RL in multi-agent systems. Experiments were conducted on the StarCraft II micromanagement benchmark to validate the effectiveness of the proposed method. We used 11 scenarios comprising five easy, three hard, and three very hard scenarios, and deliberately limited the number of time steps for each scenario to demonstrate the improved sample efficiency. The proposed method outperforms QMIX in eight of the 11 scenarios. These results provide strong evidence that the proposed method is more sample-efficient than QMIX and effectively addresses the limitations of DRL in multi-agent systems.
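
    The abstract describes the core idea at a high level: a masked reconstruction objective is attached to QMIX as an auxiliary task so that agent representations also learn to recover masked observation features. The sketch below illustrates one way such an auxiliary loss could be combined with a QMIX-style TD loss in PyTorch. The class and function names, mask ratio, network sizes, and loss weighting are illustrative assumptions, not the paper's actual M-QMIX implementation.

    ```python
    import torch
    import torch.nn as nn

    class MaskedReconstructionAux(nn.Module):
        """Auxiliary head that reconstructs randomly masked observation features.

        All architectural details here (mask ratio, layer sizes) are assumptions
        for illustration; the exact M-QMIX design is not given in the abstract.
        """

        def __init__(self, obs_dim: int, hidden_dim: int = 64, mask_ratio: float = 0.3):
            super().__init__()
            self.mask_ratio = mask_ratio
            self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU())
            self.decoder = nn.Linear(hidden_dim, obs_dim)

        def forward(self, obs: torch.Tensor) -> torch.Tensor:
            # Randomly zero out a fraction of each agent's observation features.
            mask = (torch.rand_like(obs) < self.mask_ratio).float()
            masked_obs = obs * (1.0 - mask)
            recon = self.decoder(self.encoder(masked_obs))
            # Reconstruction error is measured only on the masked positions.
            return ((recon - obs) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)

    # Hypothetical training step: add the auxiliary reconstruction loss to the
    # standard QMIX TD loss, weighted by a coefficient lambda_aux.
    def total_loss(td_loss: torch.Tensor, obs_batch: torch.Tensor,
                   aux: MaskedReconstructionAux, lambda_aux: float = 0.1) -> torch.Tensor:
        return td_loss + lambda_aux * aux(obs_batch)
    ```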

    Original language: English
    Article number: e0291545
    Journal: PLoS ONE
    Volume: 18
    Issue number: 9 September
    DOIs
    Publication status: Published - September 2023

    Bibliographical note

    Publisher Copyright: © 2023 Kim et al.

    ASJC Scopus subject areas

    • General
