TY - JOUR
T1 - Robust Stabilization of Delayed Neural Networks
T2 - Dissipativity-Learning Approach
AU - Saravanakumar, Ramasamy
AU - Kang, Hyung Soo
AU - Ahn, Choon Ki
AU - Su, Xiaojie
AU - Karimi, Hamid Reza
N1 - Funding Information:
Manuscript received February 14, 2017; revised October 6, 2017 and February 28, 2018; accepted June 22, 2018. Date of publication August 2, 2018; date of current version February 19, 2019. This work was supported in part by the National Research Foundation of Korea through the Ministry of Science, ICT and Future Planning under Grant NRF-2017R1A1A1A05001325 and in part by the Human Resources Program in Energy Technology of the Korea Institute of Energy Technology Evaluation and Planning granted financial resource from the Ministry of Trade, Industry & Energy, South Korea under Grant 20174030201820. (Corresponding author: Choon Ki Ahn.) R. Saravanakumar is with the Department of Mathematics, Faculty of Science, Mahidol University, Bangkok 10400, Thailand, and also with the Department of Control and Robotics Engineering, Kunsan National University, Gunsan 573-701, South Korea (e-mail: saravanamaths30@gmail.com).
Publisher Copyright:
© 2012 IEEE.
PY - 2019/3
Y1 - 2019/3
N2 - This paper examines the robust stabilization problem of continuous-time delayed neural networks via the dissipativity-learning approach. A new learning algorithm is established to guarantee the asymptotic stability as well as the (Q,S,R)-α-dissipativity of the considered neural networks. The developed result encompasses some existing results, such as H∞ and passivity performances, in a unified framework. With the introduction of a Lyapunov-Krasovskii functional together with the Legendre polynomial, a novel delay-dependent linear matrix inequality (LMI) condition and a learning algorithm for robust stabilization are presented. Demonstrative examples are given to show the usefulness of the established learning algorithm.
AB - This paper examines the robust stabilization problem of continuous-time delayed neural networks via the dissipativity-learning approach. A new learning algorithm is established to guarantee the asymptotic stability as well as the (Q,S,R)-α-dissipativity of the considered neural networks. The developed result encompasses some existing results, such as H∞ and passivity performances, in a unified framework. With the introduction of a Lyapunov-Krasovskii functional together with the Legendre polynomial, a novel delay-dependent linear matrix inequality (LMI) condition and a learning algorithm for robust stabilization are presented. Demonstrative examples are given to show the usefulness of the established learning algorithm.
KW - Dissipativity learning
KW - Legendre polynomial
KW - neural networks
KW - robust stabilization
UR - http://www.scopus.com/inward/record.url?scp=85050997468&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2018.2852807
DO - 10.1109/TNNLS.2018.2852807
M3 - Article
C2 - 30072342
AN - SCOPUS:85050997468
SN - 2162-237X
VL - 30
SP - 913
EP - 922
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 3
M1 - 8424490
ER -