On a Code-Excited Nonlinear Predictive Speech Coding (CENLP) by Means of Recurrent Neural Networks

Ni MA  Tetsuo NISHI  Gang WEI  

IEICE TRANSACTIONS on Fundamentals of Electronics, Communications and Computer Sciences   Vol.E81-A   No.8   pp.1628-1634
Publication Date: 1998/08/25
Print ISSN: 0916-8508
Type of Manuscript: Special Section PAPER (Special Section on Digital Signal Processing)
Keywords: nonlinear prediction, fully connected recurrent neural networks, vector quantization, speech coding


To improve speech coding quality, in particular the prediction of long-term dependencies, we propose a new nonlinear predictor: a fully connected recurrent neural network (FCRNN) in which the hidden units receive feedback not only from themselves but also from the output unit. A comparison of the FCRNN with conventional predictors shows that the former yields a smaller prediction error. We use this FCRNN, in place of previously proposed recurrent neural networks, as the predictor in a code-excited predictive speech coding system (i.e., CELP) and show that the resulting system requires fewer bits per frame and improves speech coding performance.
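The defining feature of the FCRNN described above is the extra feedback path from the output unit back into the hidden layer, on top of the usual hidden-to-hidden recurrence. The sketch below illustrates that connectivity for one-step sample prediction; all class names, dimensions, and weight initializations are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

class FCRNNPredictor:
    """Illustrative sketch of a fully connected recurrent predictor:
    hidden units receive feedback from themselves AND from the
    previous output unit (sizes and weights are assumptions)."""

    def __init__(self, n_in=1, n_hidden=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = 0.1 * rng.standard_normal((n_hidden, n_in))   # input -> hidden
        self.W_hh = 0.1 * rng.standard_normal((n_hidden, n_hidden))  # hidden -> hidden recurrence
        self.w_yh = 0.1 * rng.standard_normal(n_hidden)           # output -> hidden feedback (the FCRNN extra path)
        self.w_out = 0.1 * rng.standard_normal(n_hidden)          # hidden -> output
        self.h = np.zeros(n_hidden)  # hidden state
        self.y = 0.0                 # previous output (prediction)

    def step(self, x):
        # Hidden update combines current input, previous hidden state,
        # and the previous output fed back into the hidden layer.
        pre = self.W_in @ np.atleast_1d(x) + self.W_hh @ self.h + self.w_yh * self.y
        self.h = np.tanh(pre)
        self.y = float(self.w_out @ self.h)  # one-step nonlinear prediction
        return self.y

# Run the (untrained) predictor over a short test signal.
pred = FCRNNPredictor()
signal = np.sin(0.3 * np.arange(20))
predictions = [pred.step(s) for s in signal]
```

In a code-excited coder, a predictor of this form would replace the linear short/long-term predictor, with the excitation chosen from a codebook by vector quantization of the residual.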