A decision tree can be used not only as a classifier but also as a clustering method. One such application is found in automatic speech recognition using hidden Markov models (HMMs). Because training data are insufficient, similar states of triphone HMMs are grouped together using a decision tree so that they share a common probability distribution. At the same time, the decision tree serves as a classifier for predicting the statistics of unseen triphones. In this paper, we study several cluster split criteria in decision tree building algorithms for the case where the instances to be clustered are probability density functions. In particular, when Gaussian probability distributions are to be clustered, we have found that Bhattacharyya distance based measures are more consistent than the conventional log likelihood based measure.
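For reference, the Bhattacharyya distance between two multivariate Gaussians N(μ₁, Σ₁) and N(μ₂, Σ₂) has the closed form D = ⅛(μ₁−μ₂)ᵀΣ⁻¹(μ₁−μ₂) + ½ln(det Σ / √(det Σ₁ det Σ₂)) with Σ = (Σ₁+Σ₂)/2. The sketch below is a generic NumPy implementation of that formula, not code from the paper; the function name is ours.

```python
import numpy as np

def bhattacharyya_gaussians(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between Gaussians N(mu1, cov1) and N(mu2, cov2).

    Illustrative helper (not from the paper): implements the standard
    closed-form expression for multivariate Gaussians.
    """
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1 = np.atleast_2d(np.asarray(cov1, float))
    cov2 = np.atleast_2d(np.asarray(cov2, float))
    cov = (cov1 + cov2) / 2.0            # averaged covariance Sigma
    diff = mu1 - mu2
    # Mahalanobis-style term: penalizes mean separation
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    # Log-determinant term: penalizes covariance mismatch
    _, logdet = np.linalg.slogdet(cov)
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    term2 = 0.5 * (logdet - 0.5 * (logdet1 + logdet2))
    return term1 + term2

# Identical Gaussians are at distance 0; the distance grows as the means move apart.
print(bhattacharyya_gaussians([0.0], [[1.0]], [0.0], [[1.0]]))  # → 0.0
print(bhattacharyya_gaussians([0.0], [[1.0]], [1.0], [[1.0]]))  # → 0.125
```

A tree-building algorithm could score a candidate split by how far apart the two resulting Gaussian clusters are under this measure, which is the kind of split criterion the abstract compares against the log likelihood based one.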
Title of host publication
Intelligent Data Engineering and Automated Learning - IDEAL 2002 - 3rd International Conference, Proceedings
Editors
Hujun Yin, Nigel Allinson, Richard Freeman, John Keane, Simon Hubbard
Publication status
Published - 2002
Event
3rd International Conference on Intelligent Data Engineering and Automated Learning, IDEAL 2002 - Manchester, United Kingdom
Duration: 12 Aug 2002 → 14 Aug 2002
Publication series
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Bibliographical note
Publisher Copyright: © Springer-Verlag Berlin Heidelberg 2002.
ASJC Scopus subject areas
- Theoretical Computer Science
- General Computer Science