Robust Stabilization of Delayed Neural Networks: Dissipativity-Learning Approach

Ramasamy Saravanakumar, Hyung Soo Kang, Choon Ki Ahn, Xiaojie Su, Hamid Reza Karimi

Research output: Contribution to journal › Article › peer-review

23 Citations (Scopus)

Abstract

This paper examines the robust stabilization problem of continuous-time delayed neural networks via the dissipativity-learning approach. A new learning algorithm is established to guarantee the asymptotic stability as well as the (Q,S,R)-α-dissipativity of the considered neural networks. The developed result encompasses some existing results, such as H∞ and passivity performances, in a unified framework. By introducing a Lyapunov-Krasovskii functional together with the Legendre polynomial, a novel delay-dependent linear matrix inequality (LMI) condition and a learning algorithm for robust stabilization are presented. Demonstrative examples are given to show the usefulness of the established learning algorithm.
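For context, a commonly used form of the (Q,S,R)-α-dissipativity supply-rate inequality from the dissipativity literature is sketched below; this is a general definition, not the paper's exact condition, and the weighting matrices and performance index used in the paper may differ.

```latex
% General (Q,S,R)-\alpha-dissipativity supply rate (standard form from the
% dissipativity literature; the paper's precise LMI-based condition may differ).
% A system with disturbance input u(t) and output y(t) is said to be strictly
% (Q,S,R)-\alpha-dissipative if, under zero initial conditions, for all t_f \ge 0:
\[
  \int_{0}^{t_f} \Big( y^{\top}(t)\,Q\,y(t) + 2\,y^{\top}(t)\,S\,u(t)
      + u^{\top}(t)\,R\,u(t) \Big)\,dt
  \;\ge\; \alpha \int_{0}^{t_f} u^{\top}(t)\,u(t)\,dt .
\]
% Special cases illustrate the unification claimed in the abstract:
% Q = -I, S = 0, R = \gamma^{2} I corresponds to an H_\infty performance bound,
% while Q = 0, S = I, R = 0 corresponds to passivity.
```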

Original language: English
Article number: 8424490
Pages (from-to): 913-922
Number of pages: 10
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 30
Issue number: 3
DOIs
Publication status: Published - 2019 Mar

Keywords

  • Dissipativity learning
  • Legendre polynomial
  • neural networks
  • robust stabilization

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

