Unsupervised Speaker Adaptation Using All-Phoneme Ergodic Hidden Markov Network
Yasunaga MIYAZAWA Jun-ichi TAKAMI Shigeki SAGAYAMA Shoichi MATSUNAGA
IEICE TRANSACTIONS on Information and Systems
Publication Date: 1995/08/25
Print ISSN: 0916-8532
Type of Manuscript: PAPER
Category: Speech Processing and Acoustics
Keywords: speech recognition, unsupervised speaker adaptation, all-phoneme ergodic hidden Markov network, context-dependent phoneme bigram
This paper proposes an unsupervised speaker adaptation method using an "all-phoneme ergodic Hidden Markov Network" that combines allophonic (context-dependent phone) acoustic models with stochastic language constraints. A Hidden Markov Network (HMnet) for allophone modeling and allophonic bigram probabilities derived from a large text database are combined into a single large ergodic HMM that represents arbitrary speech signals in a particular language, so that the model parameters can be re-estimated from text-unknown speech samples with the Baum-Welch algorithm. When combined with the Vector Field Smoothing (VFS) technique, unsupervised speaker adaptation can be performed effectively. In experiments, this method outperformed our previous unsupervised adaptation method, which used conventional phonetic HMMs and phoneme bigram probabilities, especially when the amount of training data was small.
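The core mechanism described in the abstract, an ergodic HMM whose transitions are seeded from bigram probabilities and whose parameters are re-estimated with Baum-Welch on unlabeled observations, can be illustrated with a minimal sketch. This is not the paper's HMnet implementation: it uses a toy discrete-observation HMM, and all sizes, initial values, and function names are illustrative assumptions.

```python
import numpy as np

# Hedged sketch (not the paper's HMnet): a small ergodic discrete HMM whose
# transition matrix is seeded from "bigram"-style probabilities, re-estimated
# with the (scaled) Baum-Welch algorithm on an unlabeled observation sequence.

def forward_backward(A, B, pi, obs):
    """Scaled forward-backward pass; returns log-likelihood, gamma, xi."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    c = np.zeros(T)  # per-frame scaling factors
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta = np.zeros((T, N)); beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta  # state posteriors, one row per frame
    xi = np.zeros((T - 1, N, N))  # transition posteriors
    for t in range(T - 1):
        xi[t] = (alpha[t][:, None] * A
                 * (B[:, obs[t + 1]] * beta[t + 1])[None, :]) / c[t + 1]
    return np.log(c).sum(), gamma, xi

def baum_welch_step(A, B, pi, obs):
    """One EM re-estimation step; returns updated (A, B, pi) and the
    log-likelihood of obs under the *input* parameters."""
    ll, gamma, xi = forward_backward(A, B, pi, obs)
    A_new = xi.sum(0) / gamma[:-1].sum(0)[:, None]
    B_new = np.zeros_like(B)
    for k in range(B.shape[1]):
        B_new[:, k] = gamma[obs == k].sum(0)
    B_new /= gamma.sum(0)[:, None]
    return A_new, B_new, gamma[0], ll

rng = np.random.default_rng(0)
N, M = 3, 4  # toy sizes: 3 states, 4 observation symbols
bigram = rng.random((N, N))
A = bigram / bigram.sum(1, keepdims=True)   # bigram-seeded ergodic transitions
B = rng.random((N, M)); B /= B.sum(1, keepdims=True)
pi = np.full(N, 1.0 / N)
obs = rng.integers(0, M, size=60)           # "text-unknown" observations

lls = []
for _ in range(5):
    A, B, pi, ll = baum_welch_step(A, B, pi, obs)
    lls.append(ll)
```

Because Baum-Welch is an EM procedure, the log-likelihood `lls` is non-decreasing across iterations; the adapted parameters absorb the speaker's characteristics without any transcription of `obs`.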