Speaker-Independent Speech Emotion Recognition Based on Two-Layer Multiple Kernel Learning

Yun JIN, Peng SONG, Wenming ZHENG, Li ZHAO, Minghai XIN

Publication
IEICE TRANSACTIONS on Information and Systems, Vol.E96-D, No.10, pp.2286-2289
Publication Date: 2013/10/01
Online ISSN: 1745-1361
DOI: 10.1587/transinf.E96.D.2286
Print ISSN: 0916-8532
Type of Manuscript: LETTER
Category: Speech and Hearing
Keywords:
speech emotion recognition, multiple kernel learning, feature selection, speaker-independent

Summary: 
In this paper, a two-layer Multiple Kernel Learning (MKL) scheme for speaker-independent speech emotion recognition is presented. In the first layer, MKL is used for feature selection. The training samples are divided into n groups according to predefined rules, and MKL-based feature selection is performed on each group, yielding n sparse feature subsets. The intersection and the union of these subsets form the final selected feature sets. In the second layer, MKL is applied again to classify speech emotions using the selected features. To evaluate the effectiveness of the proposed two-layer MKL scheme, we compare it with state-of-the-art results and show that it yields a substantial performance gain. A further experiment compares our feature selection method with other popular methods, and the results confirm its effectiveness.
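
The sketch below is only a rough illustration of the data flow described in the summary, not the authors' implementation. The two MKL stages are replaced by stand-ins: an L1-penalized linear SVM supplies the group-wise sparse feature selection, and a single RBF-kernel SVM plays the role of the second-layer classifier. The grouping rule (here, a hypothetical speaker label), the toy data, and all parameter values are assumptions.

# Minimal sketch of the two-layer pipeline (stand-ins replace the MKL stages).
import numpy as np
from sklearn.svm import LinearSVC, SVC
from sklearn.preprocessing import StandardScaler

def select_features_per_group(X, y, groups):
    """Layer 1 (stand-in): sparse feature selection on each training group.
    An L1-penalized linear SVM substitutes for MKL-based selection."""
    subsets = []
    for g in np.unique(groups):
        mask = groups == g
        clf = LinearSVC(penalty="l1", dual=False, C=1.0, max_iter=5000)
        clf.fit(X[mask], y[mask])
        # Keep features that receive a nonzero weight for any class.
        kept = np.where(np.any(np.abs(clf.coef_) > 1e-6, axis=0))[0]
        subsets.append(set(kept))
    return subsets

def combine_subsets(subsets):
    """Intersection and union of the per-group sparse feature subsets."""
    inter = set.intersection(*subsets)
    union = set.union(*subsets)
    return sorted(inter), sorted(union)

def train_second_layer(X, y, feature_idx):
    """Layer 2 (stand-in): classification on the selected features.
    A standard RBF-kernel SVM substitutes for the MKL classifier."""
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(X[:, feature_idx], y)
    return clf

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 200 utterances, 60 acoustic features, 4 emotion classes,
    # and 5 hypothetical speakers used as the grouping rule.
    X = rng.normal(size=(200, 60))
    y = rng.integers(0, 4, size=200)
    speakers = rng.integers(0, 5, size=200)
    X = StandardScaler().fit_transform(X)

    subsets = select_features_per_group(X, y, speakers)
    inter, union = combine_subsets(subsets)
    print(f"intersection: {len(inter)} features, union: {len(union)} features")

    if union:  # second layer trained on the union of the selected features
        model = train_second_layer(X, y, union)
        print("training accuracy:", model.score(X[:, union], y))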