Acoustic Modeling of Speaking Styles and Emotional Expressions in HMM-Based Speech Synthesis

Junichi YAMAGISHI, Koji ONISHI, Takashi MASUKO, Takao KOBAYASHI

Publication
IEICE TRANSACTIONS on Information and Systems, Vol. E88-D, No. 3, pp. 502-509
Publication Date: 2005/03/01
Print ISSN: 0916-8532
Type of Manuscript: Special Section PAPER (Special Section on Corpus-Based Speech Technologies)
Category: Speech Synthesis and Prosody
Keyword: HMM-based speech synthesis, expressive speech synthesis, speaking style, emotional expression, acoustic modeling, decision tree



Summary: 
This paper describes the modeling of various emotional expressions and speaking styles in synthetic speech using HMM-based speech synthesis. We present two methods for modeling speaking styles and emotional expressions. In the first, called style-dependent modeling, each speaking style and emotional expression is modeled individually. In the second, called style-mixed modeling, each speaking style and emotional expression is treated as a context, in the same way as phonetic, prosodic, and linguistic features, and all speaking styles and emotional expressions are modeled simultaneously with a single acoustic model. We chose four styles of read speech -- neutral, rough, joyful, and sad -- and compared the two modeling methods on these styles. The results of subjective evaluation tests show that the two methods perform almost equally well, and that it is possible to synthesize speech whose speaking style and emotional expression are similar to those of the target speech. In a style-classification test on synthesized speech, more than 80% of the samples generated with either model were judged to be similar to the target styles. We also show that style-mixed modeling yields fewer output and state-duration distributions than style-dependent modeling.
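
The difference between the two schemes can be sketched in a few lines of code. The following Python snippet is a minimal illustration under assumed toy context features; the names Frame, style_dependent_key, and style_mixed_key are invented for this sketch, and the paper's actual method operates on full-context HMM state labels with decision-tree-based clustering, not on these simplified keys.

from dataclasses import dataclass

STYLES = ["neutral", "rough", "joyful", "sad"]

@dataclass
class Frame:
    phoneme: str      # phonetic context (toy: current phoneme only)
    accented: bool    # prosodic context
    style: str        # one of STYLES

def style_dependent_key(f: Frame) -> tuple[str, str]:
    # Style-dependent modeling: one acoustic model per style, so the
    # style selects the model and the context label excludes it.
    model_id = f.style
    context = f"{f.phoneme}|acc={int(f.accented)}"
    return model_id, context

def style_mixed_key(f: Frame) -> tuple[str, str]:
    # Style-mixed modeling: a single shared model; the style is just
    # one more context factor, available to the tree-clustering
    # questions alongside phonetic, prosodic, and linguistic ones.
    context = f"{f.phoneme}|acc={int(f.accented)}|style={f.style}"
    return "shared", context

if __name__ == "__main__":
    frame = Frame(phoneme="a", accented=True, style="joyful")
    print(style_dependent_key(frame))  # ('joyful', 'a|acc=1')
    print(style_mixed_key(frame))      # ('shared', 'a|acc=1|style=joyful')

Because the style-mixed scheme clusters all styles in one tree, a leaf distribution can be shared across styles wherever no style question improves the split; this is consistent with the observation above that style-mixed modeling yields fewer output and state-duration distributions than training a separate model per style.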