Discriminative Learning of Filterbank Layer within Deep Neural Network Based Speech Recognition for Speaker Adaptation

Hiroshi SEKI  Kazumasa YAMAMOTO  Tomoyosi AKIBA  Seiichi NAKAGAWA  

Publication
IEICE TRANSACTIONS on Information and Systems   Vol.E102-D   No.2   pp.364-374
Publication Date: 2019/02/01
Online ISSN: 1745-1361
DOI: 10.1587/transinf.2018EDP7252
Type of Manuscript: PAPER
Category: Speech and Hearing
Keyword: 
speech recognition, deep neural network, acoustic model, speaker adaptation, filterbank learning

Summary: 
Deep neural networks (DNNs) have achieved significant success in the field of automatic speech recognition. One main advantage of DNNs is automatic feature extraction without human intervention. However, adaptation with limited available data remains a major challenge for DNN-based systems because of their enormous number of free parameters. In this paper, we propose a filterbank-incorporated DNN that combines a filterbank layer, which represents the filter shapes and center frequencies, with a DNN-based acoustic model. Whereas most systems feed pre-defined mel-scale filterbank features to the DNN, the filterbank layer and the subsequent networks of the proposed model are trained jointly, exploiting the advantages of hierarchical feature extraction. Filters in the filterbank layer are parameterized to represent speaker characteristics while minimizing the number of parameters. Optimizing one type of parameter corresponds to Vocal Tract Length Normalization (VTLN), and optimizing another corresponds to feature-space Maximum Likelihood Linear Regression (fMLLR) and feature-space Discriminative Linear Regression (fDLR). Because the filterbank layer contains only a few parameters, it is well suited to adaptation with limited data. In experiments, filterbank-incorporated DNNs proved effective for speaker and gender adaptation with limited adaptation data. Results on the CSJ task show that adapting the proposed model with 10 utterances yields a 5.8% relative word error reduction over the unadapted model.
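
To make the idea of a lightweight, trainable filterbank layer concrete, below is a minimal NumPy sketch. The exact filter family, parameterization, and training procedure of the paper are not given on this page, so the Gaussian filter shape, the class and function names, and the initialization here are illustrative assumptions, not the authors' implementation. Each filter has a learnable center frequency and width, so adapting to a new speaker requires re-estimating only 2 x n_filters values; in the actual model these parameters would be trained jointly with the DNN above by backpropagation.

    import numpy as np

    def hz_to_mel(f):
        # Convert frequency in Hz to the mel scale.
        return 2595.0 * np.log10(1.0 + f / 700.0)

    def mel_to_hz(m):
        # Convert mel-scale values back to Hz.
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    class GaussianFilterbankLayer:
        """Sketch of a filterbank layer with per-filter learnable parameters.

        Each filter is a Gaussian on the linear-frequency axis whose center
        frequency and width are trainable (hypothetical parameterization;
        the paper's actual filter family may differ).
        """

        def __init__(self, n_filters=40, n_fft_bins=257, sample_rate=16000):
            self.freqs = np.linspace(0.0, sample_rate / 2.0, n_fft_bins)
            # Initialize centers on the mel scale, matching a standard
            # mel filterbank before any adaptation.
            mel_pts = np.linspace(hz_to_mel(0.0),
                                  hz_to_mel(sample_rate / 2.0),
                                  n_filters + 2)
            hz_pts = mel_to_hz(mel_pts)
            self.centers = hz_pts[1:-1].copy()        # learnable, shape (n_filters,)
            self.widths = np.diff(hz_pts)[:-1].copy() # learnable, shape (n_filters,)

        def weights(self):
            # Build the (n_filters, n_fft_bins) filter matrix from the
            # current center/width parameters.
            c = self.centers[:, None]
            w = self.widths[:, None]
            return np.exp(-0.5 * ((self.freqs[None, :] - c) / w) ** 2)

        def forward(self, power_spectrum):
            # Apply the filterbank and log compression to one frame of a
            # power spectrum; the output feeds the DNN acoustic model.
            return np.log(self.weights() @ power_spectrum + 1e-10)

    # Usage: one frame of a 257-bin power spectrum -> 40 log-filterbank values.
    layer = GaussianFilterbankLayer()
    frame = np.abs(np.random.randn(257)) ** 2
    features = layer.forward(frame)   # shape (40,)

Because the layer is differentiable with respect to the centers and widths, gradients from the acoustic-model loss can update these few parameters on a handful of adaptation utterances while the large DNN weights stay fixed, which is the property the summary attributes to the proposed filterbank layer.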