Multiple predicting K-fold cross-validation for model selection

Research output: Contribution to journal › Article › peer-review

173 Citations (Scopus)

Abstract

K-fold cross-validation (CV) is widely adopted as a model selection criterion. In K-fold CV, (K – 1) folds are used for model construction and the hold-out fold is allocated to model validation. This implies that model construction is emphasised more than the model validation procedure. However, some studies have revealed that placing more emphasis on the validation procedure may result in improved model selection. Specifically, leave-m-out CV with n samples may achieve variable-selection consistency when m/n approaches 1. In this study, a new CV method is proposed within the framework of K-fold CV. The proposed method uses (K – 1) folds of the data for model validation, while the other fold is used for model construction. This provides (K – 1) predicted values for each observation, which are averaged to produce a final predicted value. Model selection based on the averaged predicted values can then reduce variation in the assessment due to the averaging. The variable-selection consistency of the suggested method is established, and its advantage over K-fold CV with finite samples is examined under linear, non-linear, and high-dimensional models.
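The procedure described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the function name, the use of squared-error loss as the selection criterion, and the plug-in `fit`/`predict` callables are all assumptions for the sketch. Each fold is used once for construction, the remaining (K – 1) folds are predicted, and the (K – 1) predictions per observation are averaged before scoring.

```python
import numpy as np

def multiple_predicting_kfold_cv(X, y, fit, predict, K=5, seed=0):
    """Sketch of multiple-predicting K-fold CV: train on ONE fold,
    predict the other K-1 folds, average the K-1 predictions obtained
    for each observation, and score the averaged predictions."""
    n = len(y)
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), K)

    pred_sum = np.zeros(n)
    pred_cnt = np.zeros(n)
    for k in range(K):
        train = folds[k]                      # a single fold for model construction
        model = fit(X[train], y[train])
        test = np.concatenate([folds[j] for j in range(K) if j != k])
        pred_sum[test] += predict(model, X[test])
        pred_cnt[test] += 1                   # each observation is predicted K-1 times

    y_bar = pred_sum / pred_cnt               # averaged predicted values
    return np.mean((y - y_bar) ** 2)          # CV criterion (assumed: mean squared error)
```

For model selection, the criterion would be computed for each candidate model (e.g. each candidate variable subset, fitted here by ordinary least squares as an assumed example) and the model with the smallest value selected.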

Original language: English
Pages (from-to): 197-215
Number of pages: 19
Journal: Journal of Nonparametric Statistics
Volume: 30
Issue number: 1
DOIs
Publication status: Published - 2018 Jan 2

Keywords

  • Cross-validation
  • K-fold cross-validation
  • model selection
  • tuning parameter selection

ASJC Scopus subject areas

  • Statistics and Probability
  • Statistics, Probability and Uncertainty
