Reinforcement Learning Event-Triggered Control With Flexible Performance Assurance for Stochastic Nonlinear Systems

  • Xiaona Song
  • Peng Sun
  • Shuai Song*
  • Choon Ki Ahn*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This article develops an adaptive neural optimal output feedback control strategy for stochastic nonaffine multiple-input multiple-output (MIMO) nonlinear systems with input saturation. First, a local state estimation filter is formulated to reconstruct the unavailable states while reducing redundant resource usage in the state estimation filter-to-controller channel. Then, the tracking error is confined within a flexible envelope by designing a modified flexible global prescribed-time function that depends on intermittent systems and user-prescribed performance indexes. Technically, a modified nonlinear filter featuring the hyperbolic tangent function is constructed to overcome the curse of dimensionality while compensating for the neglected filter error. Meanwhile, an optimized adaptive event-triggered controller is developed that adjusts the triggering threshold online to save network resources and control effort in the controller-to-actuator channel. Since the triggering criteria of the two channels are independent of each other, the designer can tune the signal delivery frequency of each channel separately according to practical requirements. The boundedness of all closed-loop signals is ensured via Itô's formula. Two illustrative examples verify the efficacy and feasibility of the proposed control algorithm.

Note to Practitioners—The motivation stems from the practical demand for optimal output feedback control of stochastic nonlinear systems with saturation nonlinearity. Regrettably, existing methods cannot attain the anticipated prescribed behaviors in the presence of actuator saturation.
To this end, we formulate a unique design procedure, distinct from prevailing prescribed-performance approaches, that imposes a flexible envelope on the tracking error by incorporating intermittent systems into the modified flexible global prescribed-time function, thereby remedying this bottleneck. Moreover, a local state estimation filter is developed to estimate the unavailable states while reducing resource usage in the state estimation filter-to-controller channel. On this basis, a dynamic event-triggered protocol is introduced to save network resources in the controller-to-actuator channel. The control solution is applied to two-stage chemical reactors to confirm its practical viability.
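The dynamic event-triggered idea described above can be illustrated with a minimal sketch. The class below implements a generic relative-threshold trigger with an internal dynamic variable — a common form of such rules, not the paper's exact triggering law, and all parameter names (`delta`, `m`, `lam`, `mu`) are illustrative assumptions. Two independent instances model the filter-to-controller and controller-to-actuator channels, each tunable separately as the abstract notes.

```python
import numpy as np

class DynamicEventTrigger:
    """Hedged sketch of a dynamic event-triggered transmission rule.

    A signal u(t) is transmitted only when
        |u(t) - u_last| > delta * |u(t)| + m + eta(t),
    where the dynamic variable eta evolves between events as
        eta_dot = -lam * eta + mu * (static_threshold - error),
    so the effective threshold adapts online, and the constant m > 0
    enforces a minimum inter-event gap (excluding Zeno behavior).
    """

    def __init__(self, delta=0.2, m=0.05, lam=1.0, mu=0.5, dt=0.01):
        self.delta, self.m, self.lam, self.mu, self.dt = delta, m, lam, mu, dt
        self.eta = 1.0        # dynamic threshold variable, eta(0) > 0
        self.u_last = 0.0     # last transmitted value (held by the receiver)
        self.events = 0       # number of transmissions so far

    def step(self, u):
        err = abs(u - self.u_last)
        thresh = self.delta * abs(u) + self.m
        if err > thresh + self.eta:   # triggering condition violated
            self.u_last = u           # transmit the fresh value
            self.events += 1
            err = 0.0
        # forward-Euler update of the dynamic variable between events
        self.eta += self.dt * (-self.lam * self.eta + self.mu * (thresh - err))
        self.eta = max(self.eta, 0.0)
        return self.u_last


# Two independent channels: each trigger is tuned separately,
# so the signal delivery frequency of each channel can differ.
trig_fc = DynamicEventTrigger(delta=0.1)   # filter -> controller
trig_ca = DynamicEventTrigger(delta=0.3)   # controller -> actuator
t = np.arange(0.0, 5.0, 0.01)
signal = np.sin(2 * np.pi * 0.5 * t)
held_fc = [trig_fc.step(u) for u in signal]
held_ca = [trig_ca.step(u) for u in signal]
print(trig_fc.events, "and", trig_ca.events, "transmissions out of", len(t), "samples")
```

Because each channel carries its own trigger state, loosening `delta` on one channel reduces that channel's traffic without affecting the other — the decoupling the abstract highlights.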

Original language: English
Pages (from-to): 19352-19365
Number of pages: 14
Journal: IEEE Transactions on Automation Science and Engineering
Volume: 22
DOIs
Publication status: Published - 2025

Bibliographical note

Publisher Copyright:
© 2004-2012 IEEE.

Keywords

  • Actor-critic neural networks
  • adaptive optimal control
  • dynamic event-triggered mechanism
  • flexible global performance
  • local state estimation filter
  • stochastic nonaffine MIMO nonlinear systems

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Electrical and Electronic Engineering
