Automatic Generation of Non-uniform and Context-Dependent HMMs Based on the Variational Bayesian Approach
Takatoshi JITSUHIRO Satoshi NAKAMURA
IEICE TRANSACTIONS on Information and Systems
Publication Date: 2005/03/01
Print ISSN: 0916-8532
Type of Manuscript: Special Section PAPER (Special Section on Corpus-Based Speech Technologies)
Category: Feature Extraction and Acoustic Modelings
Keyword: speech recognition, acoustic model, topology training, SSS algorithm, variational Bayesian approach
We propose a new method for automatically creating non-uniform, context-dependent HMM topologies and for selecting the number of mixture components, based on the Variational Bayesian (VB) approach. Although the Maximum Likelihood (ML) criterion is generally used to create HMM topologies, it suffers from over-fitting. Recently, to avoid this problem, the VB approach has been applied to create acoustic models for speech recognition. We introduce the VB approach into the Successive State Splitting (SSS) algorithm, which can create both contextual and temporal variations for HMMs. Experimental results indicate that the proposed method can automatically create a more efficient model than the original method. We also evaluated a method for increasing the number of mixture components by using the VB approach while considering temporal structures. The VB approach achieved almost the same performance as the ML-based methods while using a smaller number of mixture components.
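The core idea the abstract describes — that a VB objective penalizes model complexity and so avoids the over-fitting of an ML criterion when choosing the number of mixture components — can be illustrated with a variational Bayesian Gaussian mixture. The sketch below uses scikit-learn's `BayesianGaussianMixture` as a stand-in, not the authors' SSS-based method; the synthetic data, prior strength, and pruning threshold are assumptions made for the example.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Synthetic 1-D data from two well-separated Gaussians (illustrative only).
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-5.0, 1.0, 500),
                       rng.normal(5.0, 1.0, 500)]).reshape(-1, 1)

# Variational Bayesian mixture: start with more components than needed and
# let the variational posterior shrink the weights of surplus components.
vb_gmm = BayesianGaussianMixture(n_components=8,
                                 weight_concentration_prior=1e-2,
                                 max_iter=500,
                                 random_state=0).fit(data)

# Components whose posterior weight is negligible are effectively pruned,
# so the model selects its own size rather than using all 8 components.
effective = int(np.sum(vb_gmm.weights_ > 1e-2))
print(effective)
```

Under an ML criterion, all eight components would typically retain non-trivial weight, improving the training likelihood but over-fitting; the VB posterior instead concentrates weight on the components the data actually support.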