Neural Machine Translation with Target-Attention Model
Mingming YANG, Min ZHANG, Kehai CHEN, Rui WANG, Tiejun ZHAO
Publication: IEICE TRANSACTIONS on Information and Systems, Vol.E103-D, No.3, pp.684-694
Publication Date: 2020/03/01
Publicized: 2019/11/26
Online ISSN: 1745-1361
DOI: 10.1587/transinf.2019EDP7157
Type of Manuscript: PAPER
Category: Natural Language Processing
Keywords: attention mechanism, neural machine translation, forward target-attention model, reverse target-attention model, bidirectional target-attention model
Summary:
The attention mechanism, which selectively focuses on source-side information to learn a context vector for generating target words, has been shown to be an effective method for neural machine translation (NMT). In fact, generating target words depends not only on source-side information but also on target-side information. Although vanilla NMT can acquire target-side information implicitly through recurrent neural networks (RNNs), RNNs cannot adequately capture the global relationships among target-side words. To solve this problem, this paper proposes a novel target-attention approach to capture this information, thus enhancing target word prediction in NMT. Specifically, we propose three variants of the target-attention model that directly capture the global relationships among target words: 1) a forward target-attention model, which uses a target attention mechanism to incorporate previously generated target words into the prediction of the current target word; 2) a reverse target-attention model, which adopts a reverse RNN to encode the entire reversed target word sequence and combines it with source context information to generate the target sequence; 3) a bidirectional target-attention model, which combines the forward and reverse target-attention models and thus makes full use of the target words to further improve NMT performance. Our methods can be integrated into both RNN-based NMT and self-attention-based NMT, helping NMT obtain global target-side information to improve translation quality. Experiments on the NIST Chinese-to-English and WMT English-to-German translation tasks show that the proposed models achieve significant improvements over state-of-the-art baselines.
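To make the forward target-attention idea concrete, the sketch below shows one decoding step that attends over both the source annotations and the hidden states of previously generated target words, then fuses the two context vectors with the decoder state. This is a minimal illustration in PyTorch under assumed shapes and names (hidden_dim, src_annotations, tgt_history); it is not the authors' released implementation.

# Hypothetical sketch of a forward target-attention decoding step (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ForwardTargetAttentionCell(nn.Module):
    """One decoding step that attends over source annotations and over
    previously generated target hidden states (forward target attention)."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.src_score = nn.Linear(hidden_dim * 2, 1)  # additive score over source positions
        self.tgt_score = nn.Linear(hidden_dim * 2, 1)  # additive score over target history
        self.combine = nn.Linear(hidden_dim * 3, hidden_dim)

    def attend(self, query, keys, scorer):
        # query: (batch, hidden); keys: (batch, length, hidden)
        q = query.unsqueeze(1).expand(-1, keys.size(1), -1)
        scores = scorer(torch.cat([q, keys], dim=-1)).squeeze(-1)  # (batch, length)
        weights = F.softmax(scores, dim=-1)
        return torch.bmm(weights.unsqueeze(1), keys).squeeze(1)    # (batch, hidden)

    def forward(self, dec_state, src_annotations, tgt_history):
        # dec_state: current decoder hidden state, (batch, hidden)
        # src_annotations: encoder outputs, (batch, src_len, hidden)
        # tgt_history: hidden states of previously generated target words, (batch, t, hidden)
        src_context = self.attend(dec_state, src_annotations, self.src_score)
        tgt_context = self.attend(dec_state, tgt_history, self.tgt_score)
        # Fuse the source context, the target-history context, and the decoder state
        # before projecting to the output vocabulary (projection omitted here).
        return torch.tanh(self.combine(torch.cat([dec_state, src_context, tgt_context], dim=-1)))

The reverse variant described in the summary would instead encode the target sequence with a backward RNN before attention, and the bidirectional variant would combine both context vectors.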