Investigation of Combining Various Major Language Model Technologies including Data Expansion and Adaptation

Ryo MASUMURA, Taichi ASAMI, Takanobu OBA, Hirokazu MASATAKI, Sumitaka SAKAUCHI, Akinori ITO

Publication
IEICE TRANSACTIONS on Information and Systems, Vol. E99-D, No. 10, pp. 2452-2461
Publication Date: 2016/10/01
Online ISSN: 1745-1361
DOI: 10.1587/transinf.2016SLP0013
Type of Manuscript: Special Section PAPER (Special Section on Recent Advances in Machine Learning for Spoken Language Processing)
Category: Language modeling
Keywords: language models, direct decoding, unsupervised adaptation, rescoring, spontaneous speech recognition

Summary: 
This paper investigates the performance improvements made possible by combining various major language model (LM) technologies, and reveals the interactions between these technologies in spontaneous automatic speech recognition tasks. Although it is clear that recent practical LMs face several problems, no major LM technology used in isolation appears to offer sufficient performance. In consideration of this fact, combinations of various LM technologies have also been examined. However, previous work focused only on modeling technologies with limited text resources, and did not consider other technologies important in practical language modeling, i.e., the use of external text resources and unsupervised adaptation. This paper therefore employs not only manual transcriptions of the target speech recognition tasks but also external text resources. In addition, unsupervised LM adaptation based on multi-pass decoding is added to the combination. We divide LM technologies into three categories and employ key representatives of each, including recurrent neural network LMs and discriminative LMs. Our experiments show the effectiveness of combining various LM technologies not only in in-domain tasks, the subject of our previous work, but also in out-of-domain tasks. Furthermore, we reveal the relationships between the technologies in both tasks.
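
To make the combination idea concrete, the following is a minimal Python sketch of one common combination scheme: linear interpolation of several component LMs applied during N-best rescoring. The word_logprobs interface, the interpolation weights, and the lm_scale factor are illustrative assumptions, not the paper's actual implementation.

import math

def interpolated_sentence_logprob(word_logprobs_per_lm, weights):
    # Per-word linear interpolation of component LM probabilities,
    # summed over the sentence (log10 domain).
    total = 0.0
    for position in zip(*word_logprobs_per_lm):
        total += math.log10(sum(w * 10.0 ** lp
                                for w, lp in zip(weights, position)))
    return total

def rescore_nbest(nbest, lms, weights, lm_scale=10.0):
    # nbest: list of (words, acoustic_logprob) pairs from a first decoding pass.
    # lms:   objects exposing word_logprobs(words) -> per-word log10 probabilities
    #        (a hypothetical interface standing in for n-gram, RNN, or other LMs).
    def score(entry):
        words, am_logprob = entry
        lm_logprob = interpolated_sentence_logprob(
            [lm.word_logprobs(words) for lm in lms], weights)
        return am_logprob + lm_scale * lm_logprob
    return max(nbest, key=score)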
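
Similarly, a minimal sketch of unsupervised LM adaptation based on multi-pass decoding, as mentioned in the summary: a first pass produces automatic transcriptions, the LM is adapted on them, and a second pass re-decodes. The decode and adapt interfaces here are hypothetical stand-ins, not the authors' system.

def adapt_and_redecode(recognizer, background_lm, utterances, n_best=100):
    # Pass 1: decode each utterance with a background LM trained on
    # external text resources.
    first_pass = [recognizer.decode(u, background_lm, n_best)
                  for u in utterances]

    # Adapt the LM on the 1-best automatic transcriptions; no manual
    # transcriptions of the target task are required.
    pseudo_text = [nbest[0][0] for nbest in first_pass]  # top word sequences
    adapted_lm = background_lm.adapt(pseudo_text)

    # Pass 2: re-decode (or rescore) with the adapted LM.
    return [recognizer.decode(u, adapted_lm, n_best) for u in utterances]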