Abstract
Quantile regression models have become popular due to their robustness in estimation. Some machine learning (ML) models can estimate conditional quantiles; however, current ML methods mainly focus on directly adapting the quantile regression objective. In this paper, we propose a local quantile ensemble based on ML methods, which averages multiple estimated quantiles near the target quantile. It is designed to enhance the stability and accuracy of the quantile fits. This approach extends the composite quantile regression algorithm, which typically targets the central tendency under a linear model. The proposed methods can be applied to various types of data with nonlinear and heterogeneous trends. We provide an empirical rule for choosing the quantile levels around the target quantile. The bias-variance tradeoff inherent in this method offers performance benefits: averaging nearby quantile estimates reduces variance at the cost of a small bias. Through empirical studies using Monte Carlo simulations and real data sets, we demonstrate that the proposed method can significantly improve quantile estimation accuracy and stabilize the quantile fits.
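The following is a minimal sketch of the averaging idea described in the abstract, assuming the local quantile ensemble fits separate ML quantile regressors at a few levels around the target quantile and averages their predictions. The function name `local_quantile_ensemble`, the offsets, and the use of scikit-learn's gradient boosting with quantile loss are illustrative assumptions; the paper's exact level-selection rule and weighting are not specified here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor


def local_quantile_ensemble(X_train, y_train, X_test,
                            target_q=0.9, offsets=(-0.05, 0.0, 0.05)):
    """Illustrative sketch: average quantile fits at levels near target_q."""
    preds = []
    for d in offsets:
        # Keep each quantile level strictly inside (0, 1).
        q = float(np.clip(target_q + d, 0.01, 0.99))
        model = GradientBoostingRegressor(loss="quantile", alpha=q, random_state=0)
        model.fit(X_train, y_train)
        preds.append(model.predict(X_test))
    # Simple unweighted average of the nearby quantile estimates.
    return np.mean(preds, axis=0)


if __name__ == "__main__":
    # Toy heteroscedastic data with a nonlinear trend (hypothetical example).
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(500, 1))
    y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(scale=0.3 + 0.5 * X[:, 0])
    X_test = rng.uniform(0, 1, size=(100, 1))
    q90_hat = local_quantile_ensemble(X, y, X_test, target_q=0.9)
    print(q90_hat[:5])
```

Averaging fits at neighboring levels smooths out the variability of any single quantile fit, which is the variance-reduction effect the abstract refers to; using too wide a neighborhood would instead introduce bias toward adjacent quantiles.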
Original language | English |
---|---|
Pages (from-to) | 627-644 |
Number of pages | 18 |
Journal | Communications for Statistical Applications and Methods |
Volume | 31 |
Issue number | 6 |
DOIs | |
Publication status | Published - 2024 |
Bibliographical note
Publisher Copyright: © 2024 The Korean Statistical Society, and Korean International Statistical Society. All rights reserved.
Keywords
- ensemble learning
- quantile averaging
- quantile crossing
- tree-based models
- variance reduction
ASJC Scopus subject areas
- Statistics and Probability
- Modelling and Simulation
- Finance
- Statistics, Probability and Uncertainty
- Applied Mathematics