Bayesian Learning of a Language Model from Continuous Speech

Graham NEUBIG, Masato MIMURA, Shinsuke MORI, Tatsuya KAWAHARA

Publication
IEICE TRANSACTIONS on Information and Systems, Vol. E95-D, No. 2, pp. 614-625
Publication Date: 2012/02/01
Online ISSN: 1745-1361
Print ISSN: 0916-8532
Type of Manuscript: PAPER
Category: Speech and Hearing
Keywords: language modeling, automatic speech recognition, Bayesian learning, weighted finite state transducers

Summary: 
We propose a novel scheme for learning a language model (LM) for automatic speech recognition (ASR) directly from continuous speech. In the proposed method, we first generate phoneme lattices using an acoustic model with no linguistic constraints, then perform training over these lattices, learning lexical units and an LM simultaneously. As a statistical framework for this learning problem, we use non-parametric Bayesian statistics, which makes it possible to balance the complexity of the learned model (such as the size of its vocabulary) against its expressive power, and provides a principled learning algorithm through Gibbs sampling. The implementation uses weighted finite state transducers (WFSTs), which allow lattice input to be handled simply. Experimental results on natural, adult-directed speech demonstrate that LMs built using only continuous speech significantly reduce ASR phoneme error rates, and that the proposed joint Bayesian learning of lexical units and an LM over lattices contributes significantly to this improvement.
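
The abstract names the key ingredients, namely a non-parametric Bayesian prior over a lexicon and Gibbs sampling over segmentations, without giving implementation detail. The following minimal Python sketch conveys the general flavor of such a learner on plain phoneme strings: a Dirichlet-process unigram word model with a Chinese-restaurant-process predictive distribution, and blocked Gibbs sampling of word boundaries by forward filtering and backward sampling. It is a toy illustration under simplifying assumptions, not the authors' method: the paper operates on phoneme lattices via WFSTs and learns a full LM, whereas this sketch uses unambiguous strings and a unigram model, and all names here (DPUnigramModel, sample_segmentation, the toy data) are hypothetical.

import random
from collections import Counter

class DPUnigramModel:
    """Dirichlet-process unigram word model with a simple base measure:
    P0(word) = p_end * (1 - p_end)^(len-1) * (1/|alphabet|)^len."""
    def __init__(self, alphabet_size, alpha=1.0, p_end=0.5):
        self.alpha = alpha            # DP concentration parameter
        self.p_end = p_end            # geometric word-length prior
        self.alphabet_size = alphabet_size
        self.counts = Counter()       # word -> count in current segmentations
        self.total = 0                # total word tokens

    def base(self, word):
        n = len(word)
        return (self.p_end * (1 - self.p_end) ** (n - 1)
                * (1.0 / self.alphabet_size) ** n)

    def prob(self, word):
        # Chinese-restaurant-process predictive probability
        return ((self.counts[word] + self.alpha * self.base(word))
                / (self.total + self.alpha))

    def add(self, word):
        self.counts[word] += 1
        self.total += 1

    def remove(self, word):
        self.counts[word] -= 1
        if self.counts[word] == 0:
            del self.counts[word]
        self.total -= 1

def sample_segmentation(model, phonemes, max_len=8):
    """Blocked Gibbs step for one utterance: forward filtering over
    word-end positions, then backward sampling of the word boundaries."""
    n = len(phonemes)
    fwd = [0.0] * (n + 1)
    fwd[0] = 1.0
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            fwd[i] += fwd[j] * model.prob(phonemes[j:i])
    # Backward sample: draw the start position of the word ending at i
    words, i = [], n
    while i > 0:
        starts = list(range(max(0, i - max_len), i))
        weights = [fwd[j] * model.prob(phonemes[j:i]) for j in starts]
        j = random.choices(starts, weights)[0]
        words.append(phonemes[j:i])
        i = j
    words.reverse()
    return words

# Toy usage on hypothetical unsegmented phoneme strings
random.seed(0)
utterances = ["katawokaku", "kakukatawo", "wokakukata"]
model = DPUnigramModel(alphabet_size=26)
segs = [[u] for u in utterances]      # initialize: one word per utterance
for s in segs:
    for w in s:
        model.add(w)
for sweep in range(100):              # Gibbs sweeps
    for idx, u in enumerate(utterances):
        for w in segs[idx]:
            model.remove(w)           # remove this utterance from the counts
        segs[idx] = sample_segmentation(model, u)
        for w in segs[idx]:
            model.add(w)              # add the resampled words back
print(segs)

Each sweep removes one utterance's words from the model, resegments it under the current predictive distribution, and adds the new words back; over many sweeps the sampler trades vocabulary size against data fit, which is the complexity/expressiveness balance the abstract refers to.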