Bayesian Word Alignment and Phrase Table Training for Statistical Machine Translation

Zezhong LI, Hideto IKEDA, Junichi FUKUMOTO

Publication
IEICE TRANSACTIONS on Information and Systems, Vol.E96-D, No.7, pp.1536-1543
Publication Date: 2013/07/01
Online ISSN: 1745-1361
DOI: 10.1587/transinf.E96.D.1536
Print ISSN: 0916-8532
Type of Manuscript: PAPER
Category: Natural Language Processing
Keywords: Bayesian inference, word alignment, phrase extraction, reordering, statistical machine translation

Summary: 
In most phrase-based statistical machine translation (SMT) systems, the translation model relies on word alignment, which serves as a constraint for the subsequent construction of the phrase table. Word alignment is usually inferred with GIZA++, which implements the IBM models and the HMM model in the Expectation Maximization (EM) framework. In this paper, we present a fully Bayesian inference method for word alignment. Unlike the EM approach, Bayesian inference integrates over all possible parameter values rather than committing to a single point estimate, which we expect to yield a more robust inference. After inferring the word alignment, current SMT systems usually train the phrase table from the Viterbi word alignment, which is prone to learning incorrect phrases because of word alignment errors. To overcome this drawback, we propose a new phrase extraction method based on multiple Gibbs samples drawn from the Bayesian word alignment. Empirical results show promising improvements over the baselines in both alignment quality and translation performance.
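The central idea described in the summary is to replace EM point estimation with Bayesian inference and to draw multiple alignment samples by Gibbs sampling. As a rough illustration only, the sketch below shows collapsed Gibbs sampling for an IBM Model 1-style alignment model with a symmetric Dirichlet prior on the translation distributions t(f | e); the corpus format, the hyperparameter value ALPHA, and the function name gibbs_align are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code): collapsed Gibbs sampling for
# an IBM Model 1-style word alignment with a symmetric Dirichlet prior.
import random
from collections import defaultdict

ALPHA = 0.01          # symmetric Dirichlet hyperparameter (assumed value)
NULL = "<NULL>"       # artificial source token for unaligned target words

def gibbs_align(corpus, iterations=100, burn_in=50, sample_every=5):
    """corpus: list of (source_words, target_words) sentence pairs.
    Returns a list of alignment samples; each sample maps sentence index s
    to a list a where a[j] is the source position (0 = NULL) of target word j."""
    tgt_vocab = {f for _, tgt in corpus for f in tgt}
    v_f = len(tgt_vocab)

    # Alignment state and sufficient statistics (co-occurrence counts).
    align = []                       # align[s][j] = source index chosen for target word j
    pair_count = defaultdict(int)    # n(e, f): times target word f is aligned to source word e
    src_count = defaultdict(int)     # n(e, .): total alignment links to source word e

    for src, tgt in corpus:
        src_ext = [NULL] + src
        a = [random.randrange(len(src_ext)) for _ in tgt]   # random initialisation
        align.append(a)
        for j, f in enumerate(tgt):
            pair_count[(src_ext[a[j]], f)] += 1
            src_count[src_ext[a[j]]] += 1

    samples = []
    for it in range(iterations):
        for s, (src, tgt) in enumerate(corpus):
            src_ext = [NULL] + src
            for j, f in enumerate(tgt):
                # Remove the current link from the counts.
                e_old = src_ext[align[s][j]]
                pair_count[(e_old, f)] -= 1
                src_count[e_old] -= 1

                # Collapsed conditional: P(a_j = i | rest) is proportional to
                # (n(e_i, f) + ALPHA) / (n(e_i, .) + ALPHA * |V_f|).
                weights = [(pair_count[(e, f)] + ALPHA) /
                           (src_count[e] + ALPHA * v_f) for e in src_ext]
                r = random.random() * sum(weights)
                new_i = len(src_ext) - 1     # fallback for floating-point rounding
                for i, w in enumerate(weights):
                    r -= w
                    if r <= 0:
                        new_i = i
                        break

                # Add the re-sampled link back into the counts.
                align[s][j] = new_i
                pair_count[(src_ext[new_i], f)] += 1
                src_count[src_ext[new_i]] += 1

        if it >= burn_in and (it - burn_in) % sample_every == 0:
            samples.append([a[:] for a in align])

    return samples
```

Each returned sample is a complete alignment of the corpus; in the spirit of the proposed method, phrase extraction would be run on several such samples and the extracted phrase pairs aggregated, instead of extracting from a single Viterbi alignment.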