Acoustic Event Detection in Speech Overlapping Scenarios Based on High-Resolution Spectral Input and Deep Learning

Miquel ESPI, Masakiyo FUJIMOTO, Tomohiro NAKATANI

Publication
IEICE TRANSACTIONS on Information and Systems, Vol. E98-D, No. 10, pp. 1799-1807
Publication Date: 2015/10/01
Online ISSN: 1745-1361
Type of Manuscript: PAPER
Category: Speech and Hearing
Keywords: acoustic event detection/recognition, high-resolution feature, spectrogram patch, communication scene understanding

Summary: 
We present a method for recognizing acoustic events in conversation scenarios where speech usually overlaps with other acoustic events. While speech is usually considered the most informative acoustic event in a conversation scene, it does not always carry all the information. Non-speech events, such as a door knock, footsteps, or keyboard typing, can reveal aspects of the scene that speakers miss or avoid mentioning. Moreover, robustly detecting these events could further support speech enhancement and recognition systems by providing useful cues about the surrounding scene and noise. In acoustic event detection, state-of-the-art techniques are typically based on derived features (e.g., MFCCs or Mel-filter-bank outputs), which have successfully parameterized the spectrogram of speech but lose resolution and detail when targeting other kinds of events. In this paper, we propose a method that learns features in an unsupervised manner from high-resolution spectrogram patches (a patch being a certain number of consecutive frame features stacked together) and integrates them within a deep neural network framework to detect and classify acoustic events. Superiority over both previous work in the field and similar approaches based on derived features has been assessed by statistical measures and an evaluation with the CHIL2007 corpus, an annotated database of seminar recordings.
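
The summary defines a patch as a number of consecutive high-resolution spectral frames stacked together. The short Python sketch below illustrates that construction from a raw waveform; the FFT size, hop length, and patch width here are illustrative assumptions, not the configuration reported in the paper.

import numpy as np

def spectrogram_patches(signal, n_fft=512, hop=160, patch_frames=15):
    # NOTE: n_fft, hop, and patch_frames are illustrative values,
    # not the settings used by the authors.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(signal) - n_fft) // hop
    # Window each frame and take its log-magnitude spectrum:
    # shape (n_frames, n_fft // 2 + 1), i.e. full FFT resolution.
    frames = np.stack([signal[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)])
    spec = np.log1p(np.abs(np.fft.rfft(frames, axis=1)))
    # A patch stacks patch_frames consecutive frame spectra into one vector.
    patches = np.stack([spec[t : t + patch_frames].reshape(-1)
                        for t in range(n_frames - patch_frames + 1)])
    return patches  # shape: (n_patches, patch_frames * (n_fft // 2 + 1))

x = np.random.randn(16000)           # one second of audio at 16 kHz
print(spectrogram_patches(x).shape)  # (83, 3855) with the defaults above

Each row of the returned matrix is one high-resolution spectrogram patch, the kind of input from which features would then be learned in an unsupervised manner before classification with a deep neural network.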