Abstract
The kernel trick has been a canonical and general tool for learning complicated models from data. In the era of big data, however, kernel machines often become infeasible because of their prohibitively large computational cost. For example, the kernel support vector machine (KSVM) requires O(n^3) flops to train on a data set of sample size n. Various studies have sought to reduce the computational cost of kernel machines, and Lan et al. (2019) proposed a lower-rank linearization approach to develop a scalable algorithm for the KSVM. In this article, we extend the idea of Lan et al. (2019) to a variety of kernel machines, such as kernel ridge regression, kernel quantile regression, kernel logistic regression, and kernel support vector regression. Our numerical experiments and real data analysis show that the lower-rank linearization approach greatly reduces the computational cost of various kernel machines while preserving prediction accuracy.
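To make the scaling argument concrete, below is a minimal sketch, assuming a Nyström-style low-rank approximation with an RBF kernel and kernel ridge regression as the base learner; it is not the authors' exact algorithm, and the function names (`rbf_kernel`, `nystrom_features`) and parameter values are illustrative. The kernel matrix is approximated through m ≪ n landmark points, each observation is mapped to an m-dimensional linearized feature vector, and a linear ridge model is fit on those features, so the dominant cost drops from O(n^3) to roughly O(nm^2).

```python
# Illustrative sketch only: Nyström-style low-rank linearization of kernel ridge regression.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel k(a, b) = exp(-gamma * ||a - b||^2).
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def nystrom_features(X, landmarks, gamma=1.0, eps=1e-10):
    # Low-rank feature map Z with Z @ Z.T approximating the full n x n kernel matrix.
    K_nm = rbf_kernel(X, landmarks, gamma)          # n x m cross-kernel
    K_mm = rbf_kernel(landmarks, landmarks, gamma)  # m x m landmark kernel
    vals, vecs = np.linalg.eigh(K_mm)
    vals = np.maximum(vals, eps)                    # guard against tiny/negative eigenvalues
    return (K_nm @ vecs) / np.sqrt(vals)            # n x m linearized features

rng = np.random.default_rng(0)
n, d, m = 2000, 5, 100
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

landmarks = X[rng.choice(n, size=m, replace=False)]
Z = nystrom_features(X, landmarks, gamma=0.5)

# Linear ridge regression on the m-dimensional features: only an m x m system to solve.
lam = 1.0
w = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ y)
y_hat = Z @ w
print("train MSE:", np.mean((y - y_hat) ** 2))
```

Replacing the ridge step with a linear SVM, quantile, or logistic solver on the same linearized features yields the corresponding approximate kernel machine, which is the spirit of the extension described in the abstract.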
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 711-721 |
| Number of pages | 11 |
| Journal | Communications for Statistical Applications and Methods |
| Volume | 32 |
| Issue number | 6 |
| DOIs | |
| Publication status | Published - 2025 Nov |
Bibliographical note
Publisher Copyright: © 2025 The Korean Statistical Society, and Korean International Statistical Society. All rights reserved.
Keywords
- Kernel method
- Large-scale learning
- Low-rank linearization
ASJC Scopus subject areas
- Statistics and Probability
- Modelling and Simulation
- Finance
- Statistics, Probability and Uncertainty
- Applied Mathematics