Statistical-Based Approach to Non-segmented Language Processing


IEICE TRANSACTIONS on Information and Systems   Vol.E90-D   No.10   pp.1565-1573
Publication Date: 2007/10/01
Online ISSN: 1745-1361
DOI: 10.1093/ietisy/e90-d.10.1565
Print ISSN: 0916-8532
Type of Manuscript: Special Section PAPER (Special Section on Knowledge, Information and Creativity Support System)
Keywords: non-segmented language, unified language processing, statistical approach, probability, language identification, word extraction, search engine


Several approaches have been studied to cope with the distinctive features of non-segmented languages. When there is no explicit information about word boundaries, segmenting an input text is a formidable task in language processing. Not only a contemporary word list but also the usages of those words must be maintained to cover current texts. The accuracy and efficiency of higher-level processing rely heavily on this word-boundary identification task. In this paper, we introduce statistically based approaches to tackle the problems caused by ambiguity in word segmentation. The word-boundary identification problem is then framed as one component of a unified language-processing framework. To demonstrate the ability to conduct such unified language processing, we selectively study the tasks of language identification, word extraction, and a dictionary-less search engine.
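To illustrate the kind of statistical word-boundary identification the abstract refers to, the sketch below implements one common baseline: Viterbi search for the most probable segmentation under a unigram word model. This is a minimal illustration, not the paper's actual method; the toy word probabilities in `WORD_PROBS` are invented for the example.

```python
import math

# Toy unigram word probabilities (illustrative values, not from the paper).
WORD_PROBS = {
    "the": 0.2, "cat": 0.1, "sat": 0.1, "th": 0.01, "at": 0.05, "s": 0.01,
}

def segment(text):
    """Return the most probable segmentation of `text` under a unigram
    word model, maximizing the sum of log word probabilities (Viterbi)."""
    n = len(text)
    # best[i] = (best log-probability of segmenting text[:i], backpointer)
    best = [(-math.inf, 0)] * (n + 1)
    best[0] = (0.0, 0)
    for end in range(1, n + 1):
        for start in range(end):
            p = WORD_PROBS.get(text[start:end])
            if p is None or best[start][0] == -math.inf:
                continue
            score = best[start][0] + math.log(p)
            if score > best[end][0]:
                best[end] = (score, start)
    if best[n][0] == -math.inf:
        return None  # no segmentation covers the whole string
    # Follow backpointers to recover the word sequence.
    words, pos = [], n
    while pos > 0:
        start = best[pos][1]
        words.append(text[start:pos])
        pos = start
    return list(reversed(words))

print(segment("thecatsat"))  # -> ['the', 'cat', 'sat']
```

Because several segmentations may cover the same string (e.g. "the cat s at" versus "the cat sat"), the dynamic program resolves the ambiguity by preferring the highest-probability path, which is the essence of the statistical approach described above.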