Error Correction Using Long Context Match for Smartphone Speech Recognition

Yuan LIANG  Koji IWANO  Koichi SHINODA  

Publication
IEICE TRANSACTIONS on Information and Systems   Vol.E98-D   No.11   pp.1932-1942
Publication Date: 2015/11/01
Online ISSN: 1745-1361
DOI: 10.1587/transinf.2015EDP7179
Type of Manuscript: PAPER
Category: Speech and Hearing
Keywords: speech recognition, error correction, multimodal interface, word confusion network, context match



Summary: 
Most error correction interfaces for speech recognition applications on smartphones require the user to first mark an error region and then choose the correct word from a candidate list. We propose a simple multimodal interface that makes this process more efficient. We develop Long Context Match (LCM), which obtains candidates that complement those from the conventional word confusion network (WCN). Assuming that the user has validated not only the words preceding the error region but also those succeeding it, we use both contexts to search higher-order n-gram corpora for matching word sequences. For this purpose, we also utilize Web text data. Furthermore, we combine LCM with WCN (“LCM + WCN”) to provide users with candidate lists that are more relevant than those yielded by WCN alone. We compare our interface with a WCN-based interface on the Corpus of Spontaneous Japanese (CSJ). The proposed “LCM + WCN” method improved 1-best accuracy by 23% and Mean Reciprocal Rank (MRR) by 28%, and our interface reduced the user's load by 12%.
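The core idea of LCM — using both the validated left and right contexts of an error region to retrieve replacement candidates from an n-gram corpus — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `lcm_candidates`, the toy trigram table, and the single-word error region are all assumptions made for clarity; the actual system searches much larger higher-order n-gram corpora built from Web text and combines the results with WCN scores.

```python
from collections import Counter

def lcm_candidates(ngram_counts, left_context, right_context, gap=1):
    """Toy Long Context Match: propose fillers for a `gap`-word error
    region, anchored by user-validated context on both sides.

    ngram_counts maps n-gram tuples to corpus counts; for gap=1 it
    should contain trigrams (left word, filler, right word)."""
    left = left_context[-1]    # last validated word before the error region
    right = right_context[0]   # first validated word after the error region
    scores = Counter()
    for ngram, count in ngram_counts.items():
        # keep n-grams whose outer words match both contexts
        if len(ngram) == gap + 2 and ngram[0] == left and ngram[-1] == right:
            scores[ngram[1:-1]] += count
    # rank candidate fillers by corpus frequency
    return [" ".join(words) for words, _ in scores.most_common()]

# Hypothetical trigram counts standing in for a Web-scale corpus.
trigrams = {
    ("speech", "recognition", "system"): 5,
    ("speech", "translation", "system"): 2,
    ("speech", "recognition", "error"): 3,
}
print(lcm_candidates(trigrams, ["automatic", "speech"], ["system"]))
# → ['recognition', 'translation']
```

Because both sides of the error region constrain the search, the candidate list is typically much shorter and more relevant than one generated from the left context alone, which is what motivates combining LCM with the WCN candidates.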