Sparse Markov Decision Processes with Causal Sparse Tsallis Entropy Regularization for Reinforcement Learning

Kyungjae Lee, Sungjoon Choi, Songhwai Oh

Research output: Contribution to journal › Article › peer-review

30 Citations (Scopus)


In this letter, a sparse Markov decision process (MDP) with a novel causal sparse Tsallis entropy regularization is proposed. The proposed policy regularization induces a sparse and multimodal optimal policy distribution. A full mathematical analysis of the sparse MDP is provided: we first analyze its optimality condition, then propose a sparse value iteration method for solving it and prove the convergence and optimality of sparse value iteration using the Banach fixed-point theorem. The proposed sparse MDP is compared to soft MDPs, which use causal entropy regularization. We show that the performance error introduced by the regularization term is bounded by a constant for a sparse MDP, whereas for a soft MDP it grows logarithmically with the number of actions. In experiments, we apply sparse MDPs to reinforcement learning problems, where the proposed method outperforms existing methods in terms of convergence speed and performance.
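The sparse policies described in the abstract can be illustrated with a minimal sketch. This is not the letter's exact formulation (the true operator may differ in scaling and constants); it assumes the sparsemax simplex projection of Martins and Astudillo (2016), on which Tsallis-entropy-regularized MDPs build, together with a hypothetical `spmax` backup operator and a tabular `sparse_value_iteration` loop, all named here for illustration only.

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of z onto the probability simplex.
    Unlike softmax, low-scoring entries receive exactly zero
    probability, giving a sparse (and possibly multimodal) policy."""
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, z.size + 1)
    cssv = np.cumsum(z_sorted)
    in_support = 1 + k * z_sorted > cssv   # entries kept in the support
    k_z = k[in_support][-1]                # support size
    tau = (cssv[k_z - 1] - 1.0) / k_z      # threshold
    return np.maximum(z - tau, 0.0)

def spmax(z):
    """Sketch of a sparse counterpart of log-sum-exp used as the backup:
    it stays within a constant of max(z), consistent with the constant
    performance bound mentioned in the abstract (form is an assumption)."""
    z = np.asarray(z, dtype=float)
    supp = sparsemax(z) > 0
    tau = (z[supp].sum() - 1.0) / supp.sum()
    return 0.5 * np.sum(z[supp] ** 2 - tau ** 2) + 0.5

def sparse_value_iteration(P, R, gamma=0.9, iters=300):
    """Sparse value iteration on a toy tabular MDP.
    P: (S, A, S) transition tensor, R: (S, A) reward matrix."""
    S, A, _ = P.shape
    V = np.zeros(S)
    for _ in range(iters):
        Q = R + gamma * (P @ V)                        # (S, A) backup
        V = np.array([spmax(Q[s]) for s in range(S)])  # sparse max
    policy = np.array([sparsemax(Q[s]) for s in range(S)])
    return V, policy
```

On a random toy MDP, each row of the returned policy is a valid probability distribution in which clearly suboptimal actions receive exactly zero mass, in contrast to a softmax policy, which assigns positive probability to every action.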

Original language: English
Pages (from-to): 1466-1473
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Issue number: 3
Publication status: Published - 2018 Jul
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2016 IEEE.


Keywords

  • Autonomous agents
  • deep learning in robotics and automation
  • learning and adaptive systems

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Biomedical Engineering
  • Human-Computer Interaction
  • Mechanical Engineering
  • Computer Vision and Pattern Recognition
  • Computer Science Applications
  • Control and Optimization
  • Artificial Intelligence


