Abstract
We propose a deep neural network-based least distance estimator (DNN-LD) for multivariate regression, offering enhanced flexibility and robustness over conventional approaches. Leveraging the properties of the DNN architecture, the method effectively models linear and nonlinear conditional mean functions and accommodates multivariate responses by expanding the output layer nodes. Compared to least squares, DNN-LD more efficiently captures interdependencies among responses and demonstrates increased robustness to outliers. To enable variable selection in high-dimensional settings, we introduce the (A)GDNN-LD estimator, which incorporates (adaptive) group Lasso penalties into the DNN framework, allowing simultaneous model estimation and feature selection. For computation, we propose a quadratic smoothing approximation to facilitate optimization of the non-smooth objective function based on the least distance loss. Simulation studies and a real data analysis demonstrate the promising performance of the proposed method.
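To make the loss concrete, the least distance criterion sums the (unsquared) Euclidean norms of the multivariate residuals, which is non-differentiable whenever a residual vector is zero. A minimal sketch of one common quadratic-style smoothing, `sqrt(||r||^2 + delta^2)`, is shown below; the paper's exact smoothing scheme and the `delta` parameter name are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def smoothed_least_distance_loss(Y, Y_hat, delta=1e-4):
    """Smoothed least distance loss for multivariate responses.

    Y, Y_hat : arrays of shape (n_samples, n_responses).
    Returns sum_i sqrt(||y_i - yhat_i||^2 + delta^2), a differentiable
    surrogate for sum_i ||y_i - yhat_i|| (illustrative smoothing only;
    the paper's quadratic approximation may differ in detail).
    """
    resid = Y - Y_hat
    row_norms_sq = np.sum(resid ** 2, axis=1)  # squared norm per observation
    return np.sum(np.sqrt(row_norms_sq + delta ** 2))
```

Because each residual vector contributes its norm rather than its squared norm, a single large outlier is penalized linearly rather than quadratically, which is the source of the robustness claimed for DNN-LD relative to least squares.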
| Original language | English |
|---|---|
| Pages (from-to) | 2308-2325 |
| Number of pages | 18 |
| Journal | Journal of Statistical Computation and Simulation |
| Volume | 95 |
| Issue number | 10 |
| DOIs | |
| Publication status | Published - 2025 |
Bibliographical note
Publisher Copyright: © 2025 Informa UK Limited, trading as Taylor & Francis Group.
Keywords
- Multivariate nonlinear regression
- adaptive group Lasso
- deep neural networks
- least distance
- variable selection
ASJC Scopus subject areas
- Statistics and Probability
- Modelling and Simulation
- General Business, Management and Accounting
- Statistics, Probability and Uncertainty
- Applied Mathematics