Peak-to-peak exponential direct learning of continuous-time recurrent neural network models: A matrix inequality approach

Choon Ki Ahn, Moon Kyou Song

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

The purpose of this paper is to propose a new peak-to-peak exponential direct learning law (P2PEDLL) for continuous-time dynamic neural network models subject to disturbance. Dynamic neural network models trained with the proposed P2PEDLL, which is derived from a matrix inequality formulation, are exponentially stable with a guaranteed exponential peak-to-peak norm performance. The proposed P2PEDLL can be determined by solving two matrix inequalities with a fixed parameter, which can be checked efficiently using existing standard numerical algorithms. A numerical example demonstrates the validity of the proposed direct learning law.
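The computational step described in the abstract, namely checking two matrix inequalities that become convex once a scalar parameter is fixed, can be illustrated with standard semidefinite-programming tools. The sketch below is not the paper's P2PEDLL: it solves the classical peak-to-peak (L∞-induced) performance LMIs for a simple linear comparison system, with illustrative matrices A, B, C, the fixed scalar parameter swept over a grid, and CVXPY with the SCS solver assumed as the numerical stack.

```python
# A minimal sketch, NOT the paper's formulation: for a linear comparison
# system  x' = A x + B w,  z = C x,  a peak-to-peak bound gamma is certified
# by two matrix inequalities that become LMIs once the scalar lam > 0 is
# fixed, so one can grid over lam and solve each pair with an SDP solver.
import numpy as np
import cvxpy as cp

def peak_to_peak_bound(A, B, C, lam, eps=1e-6):
    """Solve the two LMIs for a fixed scalar lam; return gamma or None."""
    n, m = B.shape
    p = C.shape[0]
    P = cp.Variable((n, n), symmetric=True)   # Lyapunov matrix
    mu = cp.Variable(nonneg=True)             # disturbance scaling
    gamma = cp.Variable(nonneg=True)          # peak-to-peak bound

    # Inequality 1: [A'P + PA + lam*P, PB; B'P, -mu*I] <= -eps*I
    lmi1 = cp.bmat([[A.T @ P + P @ A + lam * P, P @ B],
                    [B.T @ P, -mu * np.eye(m)]])
    # Inequality 2: [lam*P, 0, C'; 0, (gamma-mu)*I, 0; C, 0, gamma*I] >= eps*I
    lmi2 = cp.bmat([[lam * P, np.zeros((n, m)), C.T],
                    [np.zeros((m, n)), (gamma - mu) * np.eye(m), np.zeros((m, p))],
                    [C, np.zeros((p, m)), gamma * np.eye(p)]])

    cons = [P >> eps * np.eye(n),
            lmi1 << -eps * np.eye(n + m),
            lmi2 >> eps * np.eye(n + m + p)]
    prob = cp.Problem(cp.Minimize(gamma), cons)
    prob.solve(solver=cp.SCS)
    return gamma.value if prob.status in ("optimal", "optimal_inaccurate") else None

# Illustrative data: grid over the fixed parameter lam, keep the best bound.
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
best = min((g for lam in np.linspace(0.1, 3.0, 15)
            if (g := peak_to_peak_bound(A, B, C, lam)) is not None), default=None)
print("certified peak-to-peak bound:", best)
```

Fixing the scalar parameter is what turns each check into a convex feasibility problem; sweeping it over a grid mirrors the "fixed parameter" role described in the abstract.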

Original language: English
Article number: 68
Journal: Journal of Inequalities and Applications
Volume: 2013
DOIs
Publication status: Published - Dec 2013

Keywords

  • Disturbance
  • Dynamic neural network models
  • Exponential peak-to-peak norm performance
  • Matrix inequality
  • Training law

ASJC Scopus subject areas

  • Analysis
  • Discrete Mathematics and Combinatorics
  • Applied Mathematics

