Abstract
Principal component analysis (PCA) is a canonical tool that reduces data dimensionality by finding linear transformations that project the data into a lower-dimensional subspace while preserving the variability of the data. Selecting the number of principal components (PCs) is essential but challenging because PCA is an unsupervised learning problem without a clear target label at the sample level. In this article, we propose a new method to determine the optimal number of PCs based on the stability of the space spanned by the PCs. A series of analyses with both synthetic and real data demonstrates the superior performance of the proposed method.
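The abstract only outlines the idea; the stability criterion itself is defined in the paper. As an illustrative sketch only, not the authors' procedure, the NumPy code below scores each candidate number of PCs by how consistently the spanned subspace is recovered across pairs of random subsamples. The functions `subspace_affinity` and `stability_path`, and parameters such as `subsample_frac` and `n_pairs`, are assumptions made for this example.

```python
import numpy as np

def subspace_affinity(U, V):
    """Similarity between the column spaces of U and V (each p x k with
    orthonormal columns), based on squared cosines of the principal angles;
    equals 1 when the two subspaces coincide."""
    k = U.shape[1]
    s = np.linalg.svd(U.T @ V, compute_uv=False)  # cosines of principal angles
    return np.sum(s ** 2) / k

def stability_path(X, k_max, n_pairs=50, subsample_frac=0.5, seed=None):
    """For each candidate k = 1..k_max, average the affinity between the
    leading-k PC subspaces fitted on pairs of random subsamples of X."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    m = int(subsample_frac * n)
    scores = np.zeros(k_max)
    for _ in range(n_pairs):
        idx1 = rng.choice(n, m, replace=False)
        idx2 = rng.choice(n, m, replace=False)
        # Leading right singular vectors = principal component loadings.
        _, _, Vt1 = np.linalg.svd(X[idx1] - X[idx1].mean(0), full_matrices=False)
        _, _, Vt2 = np.linalg.svd(X[idx2] - X[idx2].mean(0), full_matrices=False)
        for k in range(1, k_max + 1):
            scores[k - 1] += subspace_affinity(Vt1[:k].T, Vt2[:k].T)
    return scores / n_pairs

# Toy example: a rank-3 signal plus isotropic noise, so the stability
# curve should peak at or near k = 3.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3)) @ rng.normal(size=(3, 20)) + 0.1 * rng.normal(size=(300, 20))
scores = stability_path(X, k_max=8, seed=0)
print(np.argmax(scores) + 1, np.round(scores, 3))
```

The score is normalized by k so that subspaces of different dimensions are comparable; other subspace-similarity measures (e.g. projection Frobenius distance) could be substituted in the same scheme.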
| Original language | English |
|---|---|
| Pages (from-to) | 1923-1938 |
| Number of pages | 16 |
| Journal | Computational Statistics |
| Volume | 33 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 2018 Dec 1 |
Keywords
- Principal component analysis
- Stability selection
- Structural dimension
- Subsampling
ASJC Scopus subject areas
- Statistics and Probability
- Statistics, Probability and Uncertainty
- Computational Mathematics