Abstract
The autocorrelation matrix of the kernelized input vector is well approximated by the squared Gram matrix, scaled down by the dictionary size. This holds under the condition that the input covariance matrix in the feature space is approximated by its sample estimate based on the dictionary elements, and it leads to two fundamental insights into online learning with kernels. First, the eigenvalue spread of the autocorrelation matrix relevant to the hyperplane projection along affine subspace (HYPASS) algorithm is approximately the square root of that for the kernel normalized least mean square (KNLMS) algorithm. This clarifies the mechanism behind the fast convergence afforded by the use of a Hilbertian metric. Second, for efficient function estimation, the dictionary generally needs to be constructed with the distribution of the input vector taken into account, so that the condition above is satisfied. The theoretical results are validated by computer experiments.
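The abstract's central claim can be checked numerically. The following is a minimal sketch (not from the letter itself), assuming a Gaussian kernel and inputs and dictionary elements drawn from the same standard Gaussian distribution; all names and parameter choices here are illustrative. It compares a Monte Carlo estimate of the autocorrelation matrix R = E[κ(x)κ(x)ᵀ] of the kernelized input κ(x) = [k(x, d₁), …, k(x, dᵣ)]ᵀ against the scaled squared Gram matrix K²/r, and then inspects the eigenvalue spreads.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_gram(A, B, sigma=1.0):
    """Pairwise Gaussian-kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

dim, r, n = 2, 20, 100_000  # illustrative sizes, not from the letter

# Dictionary and inputs drawn from the same distribution, so the
# sample-estimate condition stated in the abstract approximately holds.
D = rng.standard_normal((r, dim))   # dictionary {d_1, ..., d_r}
X = rng.standard_normal((n, dim))   # input vectors x

K = gauss_gram(D, D)                # Gram matrix, K_ij = k(d_i, d_j)
Phi = gauss_gram(X, D)              # each row is a kernelized input kappa(x)

# Monte Carlo estimate of R = E[kappa(x) kappa(x)^T] vs. the claimed K^2 / r
R = Phi.T @ Phi / n
approx = (K @ K) / r
print("relative error ||R - K^2/r|| / ||R|| =",
      np.linalg.norm(R - approx) / np.linalg.norm(R))

# Eigenvalue-spread relation: eig(K^2) = eig(K)^2, so the spread of K is
# exactly the square root of the spread of K^2 (~ the KNLMS-relevant matrix R).
eigK = np.linalg.eigvalsh(K)
eigR = np.linalg.eigvalsh(R)
spreadK = eigK.max() / eigK.min()
print("spread(K)   =", spreadK)
print("spread(K^2) =", spreadK ** 2)
print("spread(R)   =", eigR.max() / eigR.min())
```

Note that since the eigenvalues of K² are exactly the squares of those of K, the square-root relation between the two spreads is exact at the level of the Gram matrices; the only part the sketch checks empirically is the approximation R ≈ K²/r.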
| Original language | English |
| --- | --- |
| Article number | 7536151 |
| Pages (from-to) | 1424-1428 |
| Number of pages | 5 |
| Journal | IEEE Signal Processing Letters |
| Volume | 23 |
| Issue number | 10 |
| DOIs | |
| Publication status | Published - 2016 Oct |
Bibliographical note
Publisher Copyright: © 1994-2012 IEEE.
Keywords
- Kernel adaptive filter
- online learning
- reproducing kernel Hilbert space (RKHS)
ASJC Scopus subject areas
- Signal Processing
- Electrical and Electronic Engineering
- Applied Mathematics