Efficient model selection for probabilistic K nearest neighbour classification

Ji Won Yoon, Nial Friel

Research output: Contribution to journal › Article › peer-review

15 Citations (Scopus)

Abstract

Probabilistic K-nearest neighbour (PKNN) classification has been introduced to improve the performance of the original K-nearest neighbour (KNN) classification algorithm by explicitly modelling uncertainty in the classification of each feature vector. However, an issue common to both KNN and PKNN is selecting the optimal number of neighbours, K. The contribution of this paper is to incorporate the uncertainty in K into the decision making, and consequently to provide improved classification through Bayesian model averaging. Indeed, the problem of assessing the uncertainty in K can be viewed as one of statistical model selection, which is one of the most important technical issues in statistics and machine learning. In this paper, we develop a new functional approximation algorithm to reconstruct the density of the model order without relying on time-consuming Monte Carlo simulations. In addition, the algorithms avoid cross-validation by adopting a Bayesian framework. The performance of the proposed approaches is evaluated on several real experimental datasets.
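The sketch below is a minimal, hypothetical illustration of the Bayesian model averaging step described in the abstract: class probabilities from KNN are averaged over candidate values of K, weighted by an approximate posterior over K. The per-K weight here is computed from a simple leave-one-out predictive score, which is only a stand-in for the paper's functional approximation of the model-order density (the actual method avoids this kind of cross-validation-style computation); all function names and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def knn_class_probs(X_train, y_train, x, K, n_classes, exclude=None):
    """Smoothed class probabilities for query x from its K nearest neighbours."""
    d = np.linalg.norm(X_train - x, axis=1)
    if exclude is not None:
        d[exclude] = np.inf                      # leave one observation out
    nn = np.argsort(d)[:K]
    counts = np.bincount(y_train[nn], minlength=n_classes)
    return (counts + 1.0) / (K + n_classes)      # Laplace smoothing

def bma_knn_predict(X_train, y_train, x, K_values, n_classes):
    """Average KNN class probabilities over K, weighted by an approximate p(K | data)."""
    # Surrogate weight for each K: product of leave-one-out predictive probabilities.
    # (Stand-in for the paper's functional approximation of the model-order density.)
    log_w = np.zeros(len(K_values))
    for i, K in enumerate(K_values):
        for j in range(len(X_train)):
            p = knn_class_probs(X_train, y_train, X_train[j], K,
                                n_classes, exclude=j)
            log_w[i] += np.log(p[y_train[j]])
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                 # normalised weights over K
    # Model-averaged predictive distribution for the query point x.
    return sum(w[i] * knn_class_probs(X_train, y_train, x, K, n_classes)
               for i, K in enumerate(K_values))
```

The key point the sketch conveys is that no single K is chosen: the final prediction is a weighted mixture of the per-K predictive distributions, so the uncertainty in K propagates into the classification.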

Original language: English
Pages (from-to): 1098-1108
Number of pages: 11
Journal: Neurocomputing
Volume: 149
Issue number: PB
DOIs
Publication status: Published - 2015 Feb 3

Bibliographical note

Publisher Copyright:
© 2014 Elsevier B.V.

Keywords

  • Bayesian inference
  • K-free model order estimation
  • Model averaging

ASJC Scopus subject areas

  • Computer Science Applications
  • Cognitive Neuroscience
  • Artificial Intelligence
