Lip Location Normalized Training for Visual Speech Recognition
Oscar VANEGAS, Keiichi TOKUDA, Tadashi KITAMURA
IEICE TRANSACTIONS on Information and Systems
Publication Date: 2000/11/25
Print ISSN: 0916-8532
Type of Manuscript: PAPER
Category: Speech and Hearing
Keywords: hidden Markov model, lip location normalization, lipreading, Tulips1, M2VTS
This paper describes a method for normalizing the lip position to improve the performance of a visual-information-based speech recognition system. Two types of information are useful in speech recognition: the speech signal itself and the visual information from the lips in motion. This paper addresses problems that arise when using images of the lips in motion, in particular the effect of variation in the lip location. The proposed lip location normalization method is based on a search algorithm for the lip position in which location normalization is integrated into model training. Speaker-independent isolated word recognition experiments were carried out on the Tulips1 and M2VTS databases. The experiments showed a recognition rate of 74.5% and an error reduction rate of 35.7% for ten-digit word recognition on the M2VTS database.
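The abstract only outlines the method, so the following is a minimal Python sketch of the general idea it describes: alternating between searching for the lip window position that the current word HMMs score best and re-estimating those HMMs from the re-cropped features. It assumes the hmmlearn library's GaussianHMM, hypothetical helper names (crop_features, best_offset, location_normalized_training), and features taken as flattened pixel windows; the paper's actual feature extraction and embedded-training procedure may differ.

import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed dependency, not from the paper

def crop_features(frames, dx, dy, win=16):
    """Cut a (win x win) lip window at offset (dx, dy) from each frame and flatten it."""
    return np.array([f[dy:dy + win, dx:dx + win].ravel() for f in frames], dtype=float)

def best_offset(frames, hmm, offsets, win=16):
    """Search the candidate lip locations; keep the one the current model scores highest."""
    scores = [(hmm.score(crop_features(frames, dx, dy, win)), (dx, dy)) for dx, dy in offsets]
    return max(scores)[1]

def location_normalized_training(train_videos, labels, offsets, n_iter=5, win=16):
    """Alternate between (a) re-estimating one HMM per word from the current crops
    and (b) re-searching the lip location of each training video under those HMMs."""
    words = sorted(set(labels))
    # start from a centered crop for every training video
    locs = {i: offsets[len(offsets) // 2] for i in range(len(train_videos))}
    models = {}
    for _ in range(n_iter):
        # (a) re-estimate the word HMMs from the currently selected lip windows
        for w in words:
            seqs = [crop_features(train_videos[i], *locs[i], win)
                    for i in range(len(train_videos)) if labels[i] == w]
            X, lengths = np.vstack(seqs), [len(s) for s in seqs]
            models[w] = GaussianHMM(n_components=5, covariance_type="diag").fit(X, lengths)
        # (b) re-search the lip location for each training video with the updated models
        for i, frames in enumerate(train_videos):
            locs[i] = best_offset(frames, models[labels[i]], offsets, win)
    return models, locs

The design point illustrated here is that the lip location is treated as a hidden nuisance variable optimized jointly with the models rather than fixed by a separate lip tracker, which is what "location normalization is integrated into the model training" suggests.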