Automatic Language Identification with Discriminative Language Characterization Based on SVM

Hongbin SUO  Ming LI  Ping LU  Yonghong YAN  

IEICE TRANSACTIONS on Information and Systems   Vol.E91-D    No.3    pp.567-575
Publication Date: 2008/03/01
Online ISSN: 1745-1361
DOI: 10.1093/ietisy/e91-d.3.567
Print ISSN: 0916-8532
Type of Manuscript: Special Section PAPER (Special Section on Robust Speech Processing in Realistic Environments)
Category: Language Identification
Keywords: language identification, supervised speaker clustering, support vector machine, discriminative language characterization score vector, pair-wise posterior probability estimation


Robust automatic language identification (LID) is the task of identifying the language from a short utterance spoken by an unknown speaker. The mainstream approaches include parallel phone recognition language modeling (PPRLM), support vector machines (SVMs) and Gaussian mixture models (GMMs). These systems use classifiers to map the cepstral features of spoken utterances into high-level scores. In this paper, in order to increase the dimension of the score vector and alleviate inter-speaker variability within the same language, multiple data groups based on supervised speaker clustering are employed to generate discriminative language characterization score vectors (DLCSV). Back-end SVM classifiers are used to model the probability distribution of each target language in the DLCSV space. Finally, the output scores of the back-end classifiers are calibrated by a pair-wise posterior probability estimation (PPPE) algorithm. The proposed language identification frameworks are evaluated on the 2003 NIST Language Recognition Evaluation (LRE) database, and the experiments show that the system described in this paper produces results comparable to those of existing systems. In particular, the SVM framework achieves an equal error rate (EER) of 4.0% in the 30-second task, outperforming state-of-the-art systems by more than 30% relative error reduction. In addition, the proposed PPRLM and GMM systems achieve EERs of 5.1% and 5.0%, respectively.
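The abstract does not spell out the PPPE calibration step, but its goal, turning pair-wise classifier outputs into per-language posteriors, is commonly realized by a pairwise-coupling scheme. Below is a minimal sketch of one such scheme (the Hastie-Tibshirani iterative coupling), assuming as input a matrix `r` where `r[i][j]` is an estimated probability that language `i` wins against language `j`; this is an illustrative stand-in, not necessarily the exact algorithm used in the paper.

```python
def pairwise_posteriors(r, iters=500, tol=1e-10):
    """Couple pair-wise win probabilities r[i][j] (with r[j][i] = 1 - r[i][j])
    into a single posterior distribution over the k classes.

    Illustrative implementation of Hastie-Tibshirani pairwise coupling:
    iterate p_i <- p_i * (sum_j r_ij) / (sum_j p_i / (p_i + p_j)), renormalize.
    At a consistent r (r_ij = p_i / (p_i + p_j)) the true p is a fixed point.
    """
    k = len(r)
    p = [1.0 / k] * k  # start from a uniform posterior
    for _ in range(iters):
        new_p = []
        for i in range(k):
            num = sum(r[i][j] for j in range(k) if j != i)
            den = sum(p[i] / (p[i] + p[j]) for j in range(k) if j != i)
            new_p.append(p[i] * num / den)
        total = sum(new_p)
        new_p = [x / total for x in new_p]  # renormalize to a distribution
        if max(abs(a - b) for a, b in zip(p, new_p)) < tol:
            p = new_p
            break
        p = new_p
    return p


# Pair-wise probabilities consistent with posteriors (0.5, 0.3, 0.2):
r = [
    [0.0, 0.5 / 0.8, 0.5 / 0.7],
    [0.3 / 0.8, 0.0, 0.3 / 0.5],
    [0.2 / 0.7, 0.2 / 0.5, 0.0],
]
posteriors = pairwise_posteriors(r)
```

In the paper's setting, `r[i][j]` would come from the back-end SVM trained to separate languages `i` and `j` in the DLCSV space, and the coupled posteriors are the calibrated language scores.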