Silent Speech Interface Using Ultrasonic Doppler Sonar

Ki-Seung LEE  

IEICE TRANSACTIONS on Information and Systems   Vol.E103-D   No.8   pp.1875-1887
Publication Date: 2020/08/01
Online ISSN: 1745-1361
DOI: 10.1587/transinf.2019EDP7211
Type of Manuscript: PAPER
Category: Speech and Hearing
Keywords: silent speech interface, ultrasonic Doppler, deep neural networks


Certain non-acoustic modalities can reveal speech attributes that allow speech signals to be synthesized without an acoustic signal. This study validated the use of ultrasonic Doppler frequency shifts caused by facial movements to implement a silent speech interface system. A 40 kHz ultrasonic beam was directed at the speaker's mouth region, and features derived from the demodulated received signals were used to estimate speech parameters. A nonlinear regression approach was employed, in which the relationship between the ultrasonic features and the corresponding speech was represented by deep neural networks (DNNs). We also investigated the discrepancies between the ultrasonic signals of audible and silent speech to validate the feasibility of totally silent communication. Since reference speech signals are unavailable for silently mouthed ultrasonic signals, a nearest-neighbor search and alignment method was proposed, in which alignment was achieved by finding the optimal pair of ultrasonic and audible features under a minimum mean-square-error criterion. The experimental results showed that the ultrasonic Doppler-based method outperformed EMG-based speech estimation and was comparable to an image-based method.
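The nearest-neighbor alignment idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes frame-synchronous feature matrices, and all function and variable names (`nn_align`, `silent_feats`, `audible_feats`, `speech_params`) are hypothetical. For each silently mouthed ultrasonic frame, the audible-speech ultrasonic frame with the minimum mean-square error is located, and its paired speech parameters are taken as the aligned reference.

```python
import numpy as np

def nn_align(silent_feats, audible_feats, speech_params):
    """Nearest-neighbor search and alignment sketch (illustrative only).

    silent_feats  : (T_s, D) ultrasonic features of silently mouthed speech
    audible_feats : (T_a, D) ultrasonic features recorded with audible speech
    speech_params : (T_a, P) speech parameters paired frame-by-frame with
                    audible_feats
    Returns (T_s, P) speech parameters aligned to the silent frames.
    """
    est = np.empty((silent_feats.shape[0], speech_params.shape[1]))
    for t, u in enumerate(silent_feats):
        # Per-frame mean-square error between this silent frame and
        # every audible-speech ultrasonic frame.
        mse = np.mean((audible_feats - u) ** 2, axis=1)
        # Pick the speech parameters paired with the closest frame.
        est[t] = speech_params[np.argmin(mse)]
    return est
```

In practice such a search would likely be constrained (e.g., to nearby frames or the same utterance) to keep the aligned parameter trajectory smooth, but the minimum-MSE selection above captures the core criterion stated in the abstract.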