ARCLE: THE ABSTRACTION AND REASONING CORPUS LEARNING ENVIRONMENT FOR REINFORCEMENT LEARNING

Hosung Lee, Sejin Kim, Seungpil Lee, Sanha Hwang, Jihwan Lee, Byung Jun Lee*, Sundong Kim*

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper introduces ARCLE, an environment designed to facilitate reinforcement learning research on the Abstraction and Reasoning Corpus (ARC). Addressing this inductive reasoning benchmark with reinforcement learning presents several challenges: a vast action space, a hard-to-reach goal, and a variety of tasks. We demonstrate that an agent trained with proximal policy optimization (PPO) can learn individual tasks through ARCLE. The adoption of non-factorial policies and auxiliary losses led to performance enhancements, effectively mitigating issues associated with action spaces and goal attainment. Based on these insights, we propose several research directions and motivations for using ARCLE, including MAML, GFlowNets, and World Models.
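The abstract describes an agent interacting with ARCLE through a standard reinforcement-learning loop with a sparse, hard-to-reach goal. The sketch below illustrates that loop under the Gymnasium-style reset()/step() convention; the `ToyGridEnv` class, its action encoding, and the paint-to-target task are hypothetical stand-ins for illustration, not ARCLE's actual API.

```python
import random

random.seed(0)  # deterministic run for this illustration


class ToyGridEnv:
    """Hypothetical stand-in for an ARCLE-style grid-editing environment.

    Mimics only the shape of a Gymnasium reset()/step() interface with a
    one-dimensional paint task; ARCLE's real observation and action spaces
    are richer (full ARC grids and editing operations).
    """

    def __init__(self, size=5, seed=0):
        self.size = size
        self.rng = random.Random(seed)

    def reset(self):
        # Start from an all-zero grid; the goal is a random target coloring.
        self.grid = [0] * self.size
        self.target = [self.rng.randint(0, 2) for _ in range(self.size)]
        return list(self.grid)

    def step(self, action):
        # An action paints one cell: (cell index, color).
        cell, color = action
        self.grid[cell] = color
        done = self.grid == self.target
        # Sparse reward, as in hard-to-reach-goal settings: 1 only on success.
        reward = 1.0 if done else 0.0
        return list(self.grid), reward, done, {}


env = ToyGridEnv()
obs = env.reset()
done = False
steps = 0
while not done and steps < 1000:
    # Random policy for illustration; a PPO agent would instead sample
    # actions from a learned policy conditioned on the observation.
    action = (random.randrange(env.size), random.randint(0, 2))
    obs, reward, done, info = env.step(action)
    steps += 1
print(f"solved={done} after {steps} steps")
```

With a vast action space and sparse reward like this, random search scales poorly, which is why the paper turns to PPO with auxiliary losses and non-factorial policies.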

Original language: English
Pages (from-to): 710-731
Number of pages: 22
Journal: Proceedings of Machine Learning Research
Volume: 274
Publication status: Published - 2024
Event: 3rd Conference on Lifelong Learning Agents, CoLLAs 2024 - Pisa, Italy
Duration: 2024 Jul 29 - 2024 Aug 1

Bibliographical note

Publisher Copyright:
© 2024 CoLLAs. All Rights Reserved.

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
