A Fast Neural Network Learning with Guaranteed Convergence to Zero System Error

Teruo AJIMURA  Isao YAMADA  Kohichi SAKANIWA  

Publication
IEICE TRANSACTIONS on Fundamentals of Electronics, Communications and Computer Sciences   Vol.E79-A   No.9   pp.1433-1439
Publication Date: 1996/09/25
Print ISSN: 0916-8508
Type of Manuscript: Special Section PAPER (Special Section on Information Theory and Its Applications)
Category: Stochastic Process/Learning
Keywords: neural network, local minimum, dimension expansion, zero system error

Summary: 
Learning algorithms for neural networks, such as back-propagation, are now well established, but two major issues remain. First, training may become trapped at a local minimum. Second, the convergence rate is too slow. Chang and Ghaffar proposed adding a new hidden node whenever training stops at a local minimum and then retraining the expanded network until the error converges to zero; the newly generated weights are designed so that the expanded network has a smaller error than the original network at the local minimum. In this paper, we propose a new method that improves their convergence rate. At the restart point of the expanded network, the proposed method is expected to yield both a lower system error and a larger error-gradient magnitude than theirs, which leads to faster convergence. Numerical examples show that the proposed method performs considerably better than the conventional Chang and Ghaffar method.
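
The following is a minimal sketch, assuming a one-hidden-layer sigmoid network trained by plain gradient descent, of the general "expand when stuck" idea underlying Chang and Ghaffar's scheme: whenever the error gradient becomes negligible (a local minimum), one hidden node is appended and training resumes. Here the new node's output weight is simply set to zero so that the system error cannot increase at the restart point; the paper's actual weight-design rule, and the improvement proposed in this work, are more elaborate. All class names, function names, and hyperparameters below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ExpandingNet:
    def __init__(self, n_in, n_hidden):
        self.W = rng.normal(scale=0.5, size=(n_hidden, n_in))  # input-to-hidden weights
        self.v = rng.normal(scale=0.5, size=n_hidden)          # hidden-to-output weights

    def forward(self, X):
        H = sigmoid(X @ self.W.T)     # hidden activations, shape (N, n_hidden)
        return H, H @ self.v          # linear output, shape (N,)

    def loss_and_grads(self, X, t):
        H, y = self.forward(X)
        e = y - t                                       # output error
        loss = 0.5 * np.mean(e ** 2)                    # system error
        gv = (e[:, None] * H).mean(axis=0)              # dLoss/dv
        gW = ((e[:, None] * self.v) * H * (1.0 - H)).T @ X / len(X)  # dLoss/dW
        return loss, gW, gv

    def add_hidden_node(self):
        # Zero output weight leaves the error unchanged at the expansion point,
        # so the expanded net starts no worse than the local minimum it left.
        self.W = np.vstack([self.W, rng.normal(scale=0.5, size=self.W.shape[1])])
        self.v = np.append(self.v, 0.0)

def train(net, X, t, lr=0.5, tol=1e-4, grad_eps=1e-3, max_steps=200000):
    for step in range(max_steps):
        loss, gW, gv = net.loss_and_grads(X, t)
        if loss < tol:
            return loss, step                     # below-tolerance ("zero") error reached
        if np.sqrt((gW ** 2).sum() + (gv ** 2).sum()) < grad_eps:
            net.add_hidden_node()                 # stalled at a local minimum: expand
            continue
        net.W -= lr * gW                          # ordinary gradient-descent step
        net.v -= lr * gv
    return loss, max_steps

# Toy XOR problem; the third input column is a constant bias term.
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
t = np.array([0., 1., 1., 0.])
net = ExpandingNet(n_in=3, n_hidden=2)
print(train(net, X, t))
```

Setting the new output weight to zero makes the expansion error-neutral, while the gradient with respect to that weight is generally nonzero, which is what lets training escape the stall; the methods compared in the paper instead design these weights so that the restart point already has a strictly lower error.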