Why Does a Hilbertian Metric Work Efficiently in Online Learning with Kernels?

Masahiro Yukawa, Klaus-Robert Müller

    Research output: Contribution to journal › Article › peer-review


    Abstract

    The autocorrelation matrix of the kernelized input vector is well approximated by the squared Gram matrix scaled down by the dictionary size. This holds under the condition that the input covariance matrix in the feature space is well approximated by its sample estimate based on the dictionary elements, and it yields two fundamental insights into online learning with kernels. First, the eigenvalue spread of the autocorrelation matrix relevant to the hyperplane projection along affine subspace (HYPASS) algorithm is approximately the square root of that for the kernel normalized least mean square (KNLMS) algorithm. This clarifies the mechanism behind the fast convergence obtained with a Hilbertian metric. Second, for efficient function estimation, the dictionary generally needs to be constructed by taking the distribution of the input vector into account, so that the above condition is satisfied. The theoretical results are supported by computer experiments.
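    The following is a minimal numerical sketch, not code from the paper, illustrating the abstract's two claims under illustrative assumptions: a Gaussian kernel, a dictionary drawn from the same distribution as the input, and the kernelized input vector kappa(x) = [k(x, x_1), ..., k(x, x_r)]^T. It checks how closely the sample autocorrelation matrix R of kappa(x) matches K^2/r (K being the dictionary Gram matrix), and compares the eigenvalue spread of R with that of K/r, the matrix relevant under the Hilbertian metric in this reading. Kernel width, dictionary size, and sample count are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d, r, n = 2, 30, 50_000      # input dimension, dictionary size, sample count (assumed)
    sigma = 0.5                  # Gaussian kernel width (assumed)

    def gauss_kernel(A, B):
        """k(a, b) = exp(-||a - b||^2 / (2 sigma^2)) for all pairs of rows of A and B."""
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2))

    # Dictionary drawn from the same distribution as the input, i.e. the
    # condition the letter states the dictionary should (approximately) satisfy.
    dictionary = rng.normal(size=(r, d))
    X = rng.normal(size=(n, d))

    K = gauss_kernel(dictionary, dictionary)   # r x r Gram matrix of the dictionary
    kappa = gauss_kernel(X, dictionary)        # n x r matrix of kernelized input vectors
    R = kappa.T @ kappa / n                    # sample autocorrelation matrix of kappa(x)

    # Claim 1: R is well approximated by K^2 / r.
    approx = K @ K / r
    rel_err = np.linalg.norm(R - approx) / np.linalg.norm(R)
    print(f"relative error ||R - K^2/r|| / ||R||: {rel_err:.3f}")

    # Claim 2: the eigenvalue spread under the Hilbertian metric (~ spread of K/r)
    # is roughly the square root of the spread of R relevant to KNLMS.
    spread_knlms = np.linalg.cond(R)
    spread_hypass = np.linalg.cond(K / r)
    print(f"spread(R) = {spread_knlms:.1f}, spread(K/r) = {spread_hypass:.1f}, "
          f"sqrt(spread(R)) = {np.sqrt(spread_knlms):.1f}")
    ```

    With a small dictionary the approximation is only rough, but the printed numbers make the square-root relationship between the two eigenvalue spreads visible.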

    Original language: English
    Article number: 7536151
    Pages (from-to): 1424-1428
    Number of pages: 5
    Journal: IEEE Signal Processing Letters
    Volume: 23
    Issue number: 10
    DOIs
    Publication status: Published - Oct 2016

    Bibliographical note

    Publisher Copyright:
    © 1994-2012 IEEE.

    Keywords

    • Kernel adaptive filter
    • online learning
    • reproducing kernel Hilbert space (RKHS)

    ASJC Scopus subject areas

    • Signal Processing
    • Electrical and Electronic Engineering
    • Applied Mathematics
