Abstract
This paper examines the robust stabilization problem of continuous-time delayed neural networks via a dissipativity-learning approach. A new learning algorithm is established to guarantee the asymptotic stability as well as the (Q, S, R)-α-dissipativity of the considered neural networks. The developed result encompasses several existing results, such as H∞ and passivity performances, in a unified framework. By introducing a Lyapunov-Krasovskii functional together with the Legendre polynomial, a novel delay-dependent linear matrix inequality (LMI) condition and a learning algorithm for robust stabilization are presented. Illustrative examples are given to demonstrate the usefulness of the established learning algorithm.
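For reference, a common form of the (Q, S, R)-α-dissipativity notion named in the abstract is sketched below. This follows the standard definition in the dissipativity literature; the exact supply rate and weight choices used in the paper may differ, so this is a general-purpose sketch rather than the authors' precise formulation.

```latex
% Quadratic supply rate for input u(t) and output y(t),
% with symmetric weights Q \le 0, R, and a general matrix S:
s\bigl(u(t),y(t)\bigr)
  = y^{\top}(t)\,Q\,y(t) + 2\,y^{\top}(t)\,S\,u(t) + u^{\top}(t)\,R\,u(t).

% Strict (Q,S,R)-\alpha-dissipativity: for some \alpha > 0,
% all t_f \ge 0, and zero initial conditions,
\int_{0}^{t_f} s\bigl(u(t),y(t)\bigr)\,dt
  \;\ge\; \alpha \int_{0}^{t_f} u^{\top}(t)\,u(t)\,dt.

% Commonly cited special cases (how H_\infty and passivity are
% typically recovered in this framework):
%   H_\infty performance \gamma :  Q = -I,\; S = 0,\; R = (\gamma^{2} + \alpha) I
%   Passivity            :         Q = 0,\;  S = I,\; R = 0
```

With these weight choices, the single dissipativity inequality reduces to the familiar H∞ and passivity conditions, which is the sense in which the abstract describes a unified framework.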
| Original language | English |
|---|---|
| Article number | 8424490 |
| Pages (from-to) | 913-922 |
| Number of pages | 10 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 30 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - 2019 Mar |
Bibliographical note
Publisher Copyright: © 2012 IEEE.
Keywords
- Dissipativity learning
- Legendre polynomial
- neural networks
- robust stabilization
ASJC Scopus subject areas
- Software
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence