Risk-Based Semi-Supervised Discriminative Language Modeling for Broadcast Transcription

Akio KOBAYASHI  Takahiro OKU  Toru IMAI  Seiichi NAKAGAWA  

Publication
IEICE TRANSACTIONS on Information and Systems   Vol.E95-D   No.11   pp.2674-2681
Publication Date: 2012/11/01
Online ISSN: 1745-1361
DOI: 10.1587/transinf.E95.D.2674
Print ISSN: 0916-8532
Type of Manuscript: PAPER
Category: Speech and Hearing
Keyword: discriminative training, semi-supervised training, language modeling, Bayes risk minimization

Summary: 
This paper describes a new method for semi-supervised discriminative language modeling, designed to improve the robustness of a discriminative language model (LM) estimated from manually transcribed (labeled) data. The discriminative LM is implemented as a log-linear model that employs a set of linguistic features derived from word or phoneme sequences. The proposed semi-supervised discriminative modeling is formulated as a multi-objective optimization programming (MOP) problem consisting of two objective functions, defined on labeled lattices and on automatic speech recognition (ASR) lattices serving as unlabeled data. Both objectives are coherently designed as expected risks that reflect word-error information for the training data. The model is trained in a discriminative manner and obtained as a solution to the MOP problem. In transcribing Japanese broadcast programs, the proposed method achieved a 6.3% relative reduction in word error rate compared with a conventional trigram LM.
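The overall scheme can be illustrated with a minimal sketch. The code below is not the authors' implementation: it replaces lattices with N-best lists, scalarizes the two risk objectives by a weighted sum (one common way to solve a MOP, assumed here for simplicity), and uses finite-difference gradient descent instead of a dedicated optimizer. All names (`posteriors`, `expected_risk`, `combined_objective`, `train`) and the mixing weight `alpha` are hypothetical.

```python
import math

def posteriors(weights, feats):
    # Log-linear hypothesis posteriors p(h) ∝ exp(w · f(h)),
    # computed with a max-shift for numerical stability.
    scores = [sum(w * x for w, x in zip(weights, f)) for f in feats]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def expected_risk(weights, feats, risks):
    # Expected risk sum_h p(h) * risk(h) over one N-best list,
    # where risk(h) reflects the word errors of hypothesis h.
    p = posteriors(weights, feats)
    return sum(pi * ri for pi, ri in zip(p, risks))

def combined_objective(weights, labeled, unlabeled, alpha=0.7):
    # Weighted-sum scalarization of the two objectives: expected risk
    # on labeled data and on ASR (unlabeled) data. For unlabeled
    # utterances the risks would have to be estimated, e.g. from
    # ASR confidence scores (an assumption of this sketch).
    r_lab = sum(expected_risk(weights, f, r) for f, r in labeled) / len(labeled)
    r_unl = sum(expected_risk(weights, f, r) for f, r in unlabeled) / len(unlabeled)
    return alpha * r_lab + (1 - alpha) * r_unl

def train(labeled, unlabeled, dim, alpha=0.7, lr=0.5, steps=200, eps=1e-4):
    # Minimize the combined risk by central-difference gradient descent.
    w = [0.0] * dim
    for _ in range(steps):
        grad = []
        for i in range(dim):
            w_hi = w[:]; w_hi[i] += eps
            w_lo = w[:]; w_lo[i] -= eps
            g = (combined_objective(w_hi, labeled, unlabeled, alpha)
                 - combined_objective(w_lo, labeled, unlabeled, alpha)) / (2 * eps)
            grad.append(g)
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
    return w
```

For example, with one labeled utterance whose first hypothesis is correct (risk 0) and one unlabeled utterance with confidence-estimated risks, training shifts the feature weights so that the combined expected risk falls below its value at the zero-weight initialization.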