HMM-Based Mask Estimation for a Speech Recognition Front-End Using Computational Auditory Scene Analysis

Ji Hun PARK, Jae Sam YOON, Hong Kook KIM

Publication
IEICE TRANSACTIONS on Information and Systems, Vol.E91-D, No.9, pp.2360-2364
Publication Date: 2008/09/01
Online ISSN: 1745-1361
DOI: 10.1093/ietisy/e91-d.9.2360
Print ISSN: 0916-8532
Type of Manuscript: LETTER
Category: Speech and Hearing
Keywords: computational auditory scene analysis, mask estimation, hidden Markov model, speech recognition



Summary: 
In this paper, we propose a new mask estimation method for the computational auditory scene analysis (CASA) of speech using two microphones. The proposed method is based on a hidden Markov model (HMM) in order to incorporate the observation that mask information should be correlated across contiguous analysis frames. In other words, an HMM is used to estimate the mask information represented by the interaural time difference (ITD) and the interaural level difference (ILD) of the two-channel signals, and the estimated mask information is then employed to separate the desired speech from the noisy speech. To show the effectiveness of the proposed mask estimation, we compare the proposed method with a Gaussian kernel-based estimation method in terms of speech recognition performance. As a result, the proposed HMM-based mask estimation method provides an average word error rate reduction of 61.4% compared with the Gaussian kernel-based mask estimation method.
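
For illustration only, the sketch below shows one way binaural ITD/ILD cues can drive an HMM-based binary mask decision over contiguous frames; it is not the authors' implementation. It simplifies the letter's approach by decoding a single frame-level mask value rather than per-time-frequency-unit masks, and all model parameters (state means, variances, transition probabilities, frame sizes) and function names are hypothetical placeholders.

# Hypothetical sketch: frame-level ITD/ILD extraction and a two-state
# (noise-dominant vs. target-dominant) HMM decoded with Viterbi.
import numpy as np

def extract_itd_ild(left, right, frame_len=400, hop=160, max_lag=16):
    """Per-frame ITD (cross-correlation peak lag, in samples) and ILD (dB)."""
    n_frames = 1 + (len(left) - frame_len) // hop
    feats = np.zeros((n_frames, 2))
    for t in range(n_frames):
        l = left[t * hop: t * hop + frame_len]
        r = right[t * hop: t * hop + frame_len]
        xcorr = np.correlate(l, r, mode="full")
        center = frame_len - 1                    # zero-lag index
        lags = np.arange(-max_lag, max_lag + 1)
        itd = lags[np.argmax(xcorr[center - max_lag: center + max_lag + 1])]
        ild = 10.0 * np.log10((np.sum(l ** 2) + 1e-12) / (np.sum(r ** 2) + 1e-12))
        feats[t] = (itd, ild)
    return feats

def viterbi_mask(feats, means, variances, trans, priors):
    """Most likely mask sequence (0 = noise-dominant, 1 = target-dominant)."""
    n_frames, n_states = len(feats), len(priors)
    log_b = np.zeros((n_frames, n_states))
    for s in range(n_states):                     # diagonal-Gaussian log-likelihoods
        diff = feats - means[s]
        log_b[:, s] = -0.5 * np.sum(diff ** 2 / variances[s]
                                    + np.log(2 * np.pi * variances[s]), axis=1)
    delta = np.log(priors) + log_b[0]
    back = np.zeros((n_frames, n_states), dtype=int)
    for t in range(1, n_frames):
        scores = delta[:, None] + np.log(trans)   # best predecessor per state
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(n_states)] + log_b[t]
    mask = np.zeros(n_frames, dtype=int)
    mask[-1] = np.argmax(delta)
    for t in range(n_frames - 2, -1, -1):         # backtrack
        mask[t] = back[t + 1, mask[t + 1]]
    return mask

In such a sketch, the decoded mask sequence could gate the noisy observations before feature extraction for the recognizer; in the letter itself, the estimated masks are applied to time-frequency units within the CASA framework, and the transition probabilities are what capture the correlation across contiguous frames.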