A Joint Neural Model for Fine-Grained Named Entity Classification of Wikipedia Articles
Masatoshi SUZUKI Koji MATSUDA Satoshi SEKINE Naoaki OKAZAKI Kentaro INUI
IEICE TRANSACTIONS on Information and Systems
Publication Date: 2018/01/01
Online ISSN: 1745-1361
Type of Manuscript: Special Section PAPER (Special Section on Semantic Web and Linked Data)
Keywords: named entity classification, Wikipedia, multi-task learning, neural network
This paper addresses the task of assigning fine-grained named entity (NE) type labels to Wikipedia articles. Information about NE types is useful when extracting knowledge of NEs from natural language text. It is common to apply supervised machine learning to named entity classification. However, when classifying into fine-grained types, a major challenge is alleviating data sparseness, since far fewer training instances are available for each fine-grained type. To address this problem, we propose two methods. First, we introduce a multi-task learning framework in which the NE type classifiers are jointly trained with a neural network. The network has a shared hidden layer, where we expect effective combinations of input features to be learned across different NE types. Second, we propose extending the input feature set by exploiting the hyperlink structure of Wikipedia. Whereas most previous studies focus on engineering features from an article's content, we observe that the contexts in which an article is mentioned can also be a useful clue for NE type classification. Concretely, we learn article vectors (i.e., entity embeddings) from Wikipedia's hyperlink structure using a skip-gram model, and incorporate the learned article vectors into the input feature set for NE type classification. To conduct large-scale practical experiments, we created a new dataset containing over 22,000 manually labeled articles. With this dataset, we empirically show that each of our two ideas yields a statistically significant improvement in classification accuracy. Moreover, we show that the proposed methods are particularly effective in labeling infrequent NE types. We have made the learned article vectors publicly available; the labeled dataset is available from the authors on request.
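The joint architecture described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the dimensions, the tanh activation, and the multi-label sigmoid output are assumptions chosen only to show the shape of a network whose hidden layer is shared across all NE type classifiers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: input features (article features plus entity
# embedding), shared hidden layer, and number of fine-grained NE types.
d_in, d_hidden, n_types = 300, 100, 200

# Shared hidden layer: feature combinations learned once, across all types.
W_hidden = rng.normal(scale=0.01, size=(d_in, d_hidden))
# One sigmoid output unit per NE type: the jointly trained classifiers.
W_out = rng.normal(scale=0.01, size=(d_hidden, n_types))

def predict(x):
    """Return per-type scores for one article's feature vector."""
    h = np.tanh(x @ W_hidden)                   # shared representation
    return 1.0 / (1.0 + np.exp(-(h @ W_out)))   # independent sigmoid per type

x = rng.normal(size=d_in)   # one article's concatenated input features
scores = predict(x)         # one score in (0, 1) per fine-grained NE type
```

Because the hidden weights are updated by the losses of every type classifier during training, feature combinations useful for frequent types can also benefit infrequent ones, which is the motivation the abstract gives for multi-task learning.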
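To show how hyperlink structure can feed a skip-gram model, here is a hedged sketch of one plausible preprocessing step: treating the sequence of articles linked from each page as a "sentence" and extracting (center, context) pairs within a window. The toy link data and the windowing scheme are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical hyperlink data: each article mapped to the sequence of
# articles it links to, in document order.
links = {
    "Tokyo": ["Japan", "Kanto", "Edo"],
    "Japan": ["Tokyo", "Asia"],
}

def skipgram_pairs(links, window=2):
    """Yield (center, context) training pairs: articles linked within
    `window` positions of each other serve as mutual contexts."""
    pairs = []
    for seq in links.values():
        for i, center in enumerate(seq):
            lo, hi = max(0, i - window), min(len(seq), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    pairs.append((center, seq[j]))
    return pairs

pairs = skipgram_pairs(links)
```

Training a skip-gram model on such pairs produces one vector per article, so articles that appear in similar link contexts (e.g., cities, or people) end up with similar embeddings, which is the property the paper exploits as an input feature for NE type classification.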