Local quantile ensemble for machine learning methods

Suin Kim, Yoonsuh Jung

Research output: Contribution to journal › Article › peer-review

Abstract

Quantile regression models have become popular due to their benefits in obtaining robust estimates. Some machine learning (ML) models can estimate conditional quantiles. However, current ML methods mainly focus on directly adapting quantile regression. In this paper, we propose a local quantile ensemble based on ML methods, which averages multiple estimated quantiles near the target quantile. It is designed to enhance the stability and accuracy of the quantile fits. This approach extends the composite quantile regression algorithm, which typically targets the central tendency under a linear model. The proposed methods can be applied to various types of data with nonlinear and heterogeneous trends. We provide an empirical rule for choosing the quantiles around the target quantile. The bias-variance tradeoff inherent in this method offers performance benefits. Through empirical studies using Monte Carlo simulations and real data sets, we demonstrate that the proposed method can significantly improve quantile estimation accuracy and stabilize the quantile fits.
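As a rough illustration of the averaging idea described in the abstract, the sketch below fits several quantile learners at levels near a target quantile and averages their predictions. It is a minimal sketch only: the base learner (scikit-learn gradient boosting with quantile loss), the function name `local_quantile_ensemble`, the spacing `delta`, the neighborhood size `k`, and the equal weighting are illustrative assumptions, not the authors' exact specification or their empirical rule for choosing the neighboring quantiles.

```python
# Minimal sketch of a local quantile ensemble: average quantile fits at
# levels tau - k*delta, ..., tau, ..., tau + k*delta near the target tau.
# The base learner, delta, k, and equal weights are assumptions for
# illustration, not the paper's exact method.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def local_quantile_ensemble(X_train, y_train, X_test, tau=0.5, delta=0.05, k=2):
    # Quantile levels around the target, kept inside (0, 1).
    levels = np.clip(tau + delta * np.arange(-k, k + 1), 0.01, 0.99)
    preds = []
    for q in levels:
        # One quantile learner per level; gradient boosting with pinball loss.
        model = GradientBoostingRegressor(loss="quantile", alpha=q, random_state=0)
        model.fit(X_train, y_train)
        preds.append(model.predict(X_test))
    # Simple unweighted average of the neighboring quantile fits.
    return np.mean(preds, axis=0)

# Toy usage on simulated heteroscedastic data.
rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=(500, 1))
y = np.sin(2 * X[:, 0]) + (0.3 + 0.3 * X[:, 0]) * rng.normal(size=500)
y_hat = local_quantile_ensemble(X[:400], y[:400], X[400:], tau=0.9)
```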

Original language: English
Pages (from-to): 627-644
Number of pages: 18
Journal: Communications for Statistical Applications and Methods
Volume: 31
Issue number: 6
DOIs
Publication status: Published - 2024

Bibliographical note

Publisher Copyright:
© 2024 The Korean Statistical Society, and Korean International Statistical Society. All rights reserved.

Keywords

  • ensemble learning
  • quantile averaging
  • quantile crossing
  • tree-based models
  • variance reduction

ASJC Scopus subject areas

  • Statistics and Probability
  • Modelling and Simulation
  • Finance
  • Statistics, Probability and Uncertainty
  • Applied Mathematics
