Lip Location Normalized Training for Visual Speech Recognition

Oscar VANEGAS  Keiichi TOKUDA  Tadashi KITAMURA  

IEICE TRANSACTIONS on Information and Systems   Vol.E83-D   No.11   pp.1969-1977
Publication Date: 2000/11/25
Print ISSN: 0916-8532
Type of Manuscript: PAPER
Category: Speech and Hearing
Keywords: hidden Markov model, lip location normalization, lipreading, Tulips1, M2VTS


This paper describes a method for normalizing lip position to improve the performance of a visual-information-based speech recognition system. Two types of information are useful in speech recognition: the speech signal itself and the visual information from the lips in motion. This paper addresses problems that arise when using images of the lips in motion, such as the effect of variation in lip location. The proposed lip location normalization method is based on a search algorithm for the lip position in which location normalization is integrated into model training. Speaker-independent isolated word recognition experiments were carried out on the Tulips1 and M2VTS databases. The experiments showed a recognition rate of 74.5% and an error reduction rate of 35.7% on the ten-digit word recognition task of the M2VTS database.
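The search-based idea behind the normalization can be illustrated with a minimal sketch: slide a fixed-size window over each frame and keep the offset whose cropped region scores best. In the paper the score is the likelihood under the hidden Markov models being trained; here a simple intensity sum stands in for it, and all names are placeholders rather than the authors' actual implementation.

```python
def crop(frame, top, left, h, w):
    """Extract an h x w sub-image from a frame given as a list of pixel rows."""
    return [row[left:left + w] for row in frame[top:top + h]]

def score(window):
    """Stand-in for the HMM log-likelihood of a cropped window."""
    return sum(sum(row) for row in window)

def search_lip_location(frame, h, w):
    """Exhaustively search all window offsets and return the best (top, left)."""
    best, best_pos = float("-inf"), (0, 0)
    for top in range(len(frame) - h + 1):
        for left in range(len(frame[0]) - w + 1):
            s = score(crop(frame, top, left, h, w))
            if s > best:
                best, best_pos = s, (top, left)
    return best_pos

# Toy 4x4 frame with a bright 2x2 patch at offset (1, 2)
frame = [
    [0, 0, 0, 0],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 0, 0],
]
print(search_lip_location(frame, 2, 2))  # -> (1, 2)
```

Integrating this search into training, as the paper proposes, would mean re-estimating the models on the best-scoring crops at each iteration rather than on fixed crops.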