Tone Recognition of Continuous Mandarin Speech Based on Tone Nucleus Model and Neural Network

Xiao-Dong WANG  Keikichi HIROSE  Jin-Song ZHANG  Nobuaki MINEMATSU  

Publication
IEICE TRANSACTIONS on Information and Systems   Vol.E91-D   No.6   pp.1748-1755
Publication Date: 2008/06/01
Online ISSN: 1745-1361
DOI: 10.1093/ietisy/e91-d.6.1748
Print ISSN: 0916-8532
Type of Manuscript: PAPER
Category: Pattern Recognition
Keyword:
Mandarin speech, tone recognition, tone nucleus model, multi-layer perceptron

Full Text: PDF (279.5KB)

Summary: 
A method was developed for automatic recognition of syllable tone types in continuous Mandarin speech by integrating two techniques: tone nucleus modeling and a neural network classifier. Tone nucleus modeling views a syllable F0 contour as consisting of three parts: an onset course, a tone nucleus, and an offset course. The two courses are transitions from and to the F0 contours of neighboring syllables, while the tone nucleus is the intrinsic part of the contour. By looking only at the tone nucleus, acoustic features less affected by neighboring syllables are obtained. When tone nucleus modeling is used, automatic detection of the tone nucleus becomes crucial, and an improvement was added to the original detection method. Distinctive acoustic features for tone types are not limited to F0 contours: other prosodic features, such as waveform power and syllable duration, are also useful for tone recognition. Such heterogeneous features are rather difficult to handle simultaneously in hidden Markov models (HMMs) but easy to handle in neural networks; a multi-layer perceptron (MLP) was adopted as the neural network. Tone recognition experiments were conducted for the speaker-dependent and speaker-independent cases. To show the effect of the integration, experiments were also conducted for two baselines: an HMM classifier with tone nucleus modeling, and an MLP classifier viewing the entire syllable instead of the tone nucleus. The integrated method achieved a tone recognition rate of 87.1% in the speaker-dependent case and 80.9% in the speaker-independent case, about a 10% relative error reduction compared to the baselines.
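The classification scheme described above can be sketched as follows: per-syllable prosodic features (F0 statistics measured over the detected tone nucleus, plus waveform power and duration) are fed to an MLP whose softmax output covers the four Mandarin tone classes. This is a minimal illustrative sketch, not the paper's implementation; the specific feature set, network size, and weights below are assumptions, and the weights are random stand-ins for a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vector for one syllable. F0 statistics are taken
# over the detected tone nucleus only, so they are less affected by
# neighboring syllables; power and duration are syllable-level features.
# The exact feature choice and scaling here are illustrative.
features = np.array([
    220.0,   # mean F0 over the tone nucleus (Hz)
    -15.0,   # F0 slope within the nucleus (Hz per frame)
    30.0,    # F0 range within the nucleus (Hz)
    0.62,    # mean log waveform power (normalized)
    0.18,    # syllable duration (s)
])

def mlp_forward(x, w1, b1, w2, b2):
    """One hidden tanh layer, softmax output over the 4 Mandarin tone types."""
    h = np.tanh(x @ w1 + b1)
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max())  # subtract max for numerical stability
    return e / e.sum()

# Randomly initialized weights stand in for a trained network.
w1 = rng.normal(scale=0.1, size=(5, 8))
b1 = np.zeros(8)
w2 = rng.normal(scale=0.1, size=(8, 4))
b2 = np.zeros(4)

probs = mlp_forward(features, w1, b1, w2, b2)
predicted_tone = int(np.argmax(probs)) + 1  # tones conventionally numbered 1-4
```

Because the heterogeneous features (F0, power, duration) are simply concatenated into one input vector, the MLP handles them jointly without the per-frame observation modeling an HMM would require, which is the practical advantage the abstract points to.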