What are the Essential Cues for Understanding Spoken Language?

Steven GREENBERG  Takayuki ARAI  

Publication
IEICE TRANSACTIONS on Information and Systems   Vol.E87-D   No.5   pp.1059-1070
Publication Date: 2004/05/01
Print ISSN: 0916-8532
Type of Manuscript: INVITED PAPER (Special Section on Speech Dynamics by Ear, Eye, Mouth and Machine)
Keyword: speech perception, intelligibility, syllables, modulation spectrum, auditory system, auditory-visual integration

Summary: 
Classical models of speech recognition assume that a detailed, short-term analysis of the acoustic signal is essential for accurately decoding the speech signal and that this decoding process is rooted in the phonetic segment. This paper presents an alternative view, one in which the time scales required to accurately describe and model spoken language are both shorter and longer than the phonetic segment, and are inherently wedded to the syllable. The syllable reflects a singular property of the acoustic signal -- the modulation spectrum -- which provides a principled, quantitative framework to describe the process by which the listener proceeds from sound to meaning. The ability to understand spoken language (i.e., intelligibility) vitally depends on the integrity of the modulation spectrum within the core range of the syllable (3-10 Hz) and reflects the variation in syllable emphasis associated with the concept of prosodic prominence ("accent"). A model of spoken language is described in which the prosodic properties of the speech signal are embedded in the temporal dynamics associated with the syllable, a unit serving as the organizational interface among the various tiers of linguistic representation.
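The modulation spectrum invoked above is the spectrum of the slow amplitude envelope of the speech waveform, not of the waveform itself. A minimal sketch of the computation (NumPy only; the 5 Hz amplitude-modulated noise is a synthetic stand-in for a syllable-rate speech envelope, and the 40 ms smoothing window is an illustrative choice, not a value from the paper):

```python
import numpy as np

fs = 8000                        # sampling rate (Hz)
t = np.arange(0, 2.0, 1/fs)      # 2 seconds of signal

# Noise carrier amplitude-modulated at a syllable-like 5 Hz rate
rng = np.random.default_rng(0)
carrier = rng.standard_normal(t.size)
signal = (1.0 + 0.8*np.sin(2*np.pi*5*t)) * carrier

# Extract the amplitude envelope: rectify, then low-pass via moving average
rectified = np.abs(signal)
win = int(0.040*fs)              # 40 ms smoothing window (assumed value)
env = np.convolve(rectified, np.ones(win)/win, mode='same')
env -= env.mean()                # remove DC so the 0 Hz bin doesn't dominate

# Modulation spectrum = magnitude spectrum of the envelope
spec = np.abs(np.fft.rfft(env))
freqs = np.fft.rfftfreq(env.size, 1/fs)

peak = freqs[np.argmax(spec)]
print(f"peak modulation frequency: {peak:.1f} Hz")  # expect a peak near 5 Hz
```

The peak falls inside the 3-10 Hz core syllable range discussed in the summary; attenuating this band of the envelope spectrum (e.g., by low-pass filtering the envelope below 3 Hz) is the kind of manipulation shown to degrade intelligibility.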