Speeding up Deep Neural Networks in Speech Recognition with Piecewise Quantized Sigmoidal Activation Function

Anhao XING, Qingwei ZHAO, Yonghong YAN

Publication
IEICE TRANSACTIONS on Information and Systems, Vol.E99-D, No.10, pp.2558-2561
Publication Date: 2016/10/01
Online ISSN: 1745-1361
DOI: 10.1587/transinf.2016SLL0007
Type of Manuscript: Special Section LETTER (Special Section on Recent Advances in Machine Learning for Spoken Language Processing)
Category: Acoustic modeling
Keyword: 
deep neural networks, speech recognition, activation function, fixed-point quantization

Summary: 
This paper proposes a new quantization framework for the activation function of deep neural networks (DNNs). We implement a fixed-point DNN by quantizing the activations into powers-of-two integers, so that the costly multiplication operations in DNN inference can be replaced with low-cost bit-shifts, greatly reducing computation. This makes it much easier to run DNN-based speech recognition on embedded systems. Experiments show that the proposed method causes no performance degradation.
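As a rough illustration of the idea (a minimal sketch, not the paper's actual implementation), the C code below rounds a sigmoid activation to the nearest power of two and replaces a fixed-point weight-activation multiply with a right bit-shift. The function names, bit widths, and clamping rule are illustrative assumptions, not details taken from the paper.

/* Sketch: powers-of-two activation quantization.
 * A sigmoid output a in (0,1) is rounded to 2^(-k), so multiplying a
 * fixed-point weight by the activation reduces to shifting right by k.
 * All parameters below (Q12 weights, 3-bit shift range) are assumed
 * for illustration only. Compile with: cc demo.c -lm */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Return k such that a ~= 2^(-k); clamp small outputs to max_shift. */
static int quantize_activation(double a, int max_shift) {
    if (a <= 0.0) return max_shift;    /* clamp zero/negative inputs */
    int k = (int)lround(-log2(a));     /* nearest power-of-two exponent */
    if (k < 0) k = 0;                  /* sigmoid outputs stay below 1 */
    if (k > max_shift) k = max_shift;
    return k;
}

int main(void) {
    const int max_shift = 7;           /* assumed exponent range */
    int32_t w = 1 << 12;               /* fixed-point weight, Q12 (= 1.0) */
    double a = 0.23;                   /* example sigmoid activation */

    int k = quantize_activation(a, max_shift);   /* 0.23 -> 2^-2 */
    int32_t product = w >> k;          /* bit-shift replaces w * a */

    printf("a=%.2f ~= 2^-%d; w*a ~= %d in Q12 (exact: %.0f)\n",
           a, k, product, w * a);
    return 0;
}

With these assumed settings, the multiply-free result (1024 in Q12, i.e. 0.25) approximates the exact product (about 942, i.e. 0.23), showing the accuracy/cost trade-off that the paper's quantization scheme is designed to make negligible in practice.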