A Learning Algorithm for Fault Tolerant Feedforward Neural Networks

Nait Charif HAMMADI, Hideo ITO

IEICE TRANSACTIONS on Information and Systems   Vol.E80-D   No.1   pp.21-27
Publication Date: 1997/01/25
Print ISSN: 0916-8532
Type of Manuscript: Special Section PAPER (Special Issue on Fault-Tolerant Computing)
Category: Redundancy Techniques
Keywords: feedforward neural network, learning algorithm, relevance of synaptic weights, essential link, open faults


A new learning algorithm is proposed to enhance the fault tolerance of feedforward neural networks. The algorithm focuses on the links (weights) whose open faults may cause errors at the output. In each training cycle of standard backpropagation, the relevance of each synaptic weight to the output error (i.e., the sensitivity of the output error to a fault on that weight) is estimated using a Taylor expansion of the output around the fault-free weights, and the weight with the maximum relevance is then decreased. The approach thus prevents any single weight from acquiring a large relevance. Simulation results show that networks trained with the proposed algorithm have significantly better fault tolerance than networks trained with standard backpropagation, and that generalization ability is improved as well.
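The idea in the abstract can be sketched in NumPy. This is a minimal illustration, not the authors' exact formulation: it assumes a first-order Taylor estimate of the error change when a weight is opened (forced to zero), i.e. relevance ≈ |(∂E/∂w)·w|, and an assumed shrink factor for the most relevant weight; network size, learning rate, and the XOR task are all illustrative choices.

```python
import numpy as np

# Hedged sketch: standard backpropagation on a tiny 2-2-1 sigmoid network,
# augmented with a per-cycle relevance estimate for each weight.  If a weight
# w suffers an open fault it becomes 0, so dw = -w and the first-order Taylor
# estimate of the error change is |(dE/dw) * w|.  The weight with the largest
# relevance is shrunk so that no single link becomes essential.

rng = np.random.default_rng(0)

# XOR training set (illustrative task, not from the paper).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0, 1, (2, 2)); b1 = np.zeros(2)
W2 = rng.normal(0, 1, (2, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eta = 0.5      # learning rate (assumed)
shrink = 0.99  # relevance-reduction factor for the max-relevance weight (assumed)

losses = []
for epoch in range(4000):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    losses.append(float(np.mean((Y - T) ** 2)))

    # Backward pass: gradients of the summed squared error.
    dY = (Y - T) * Y * (1 - Y)
    gW2 = H.T @ dY; gb2 = dY.sum(0)
    dH = (dY @ W2.T) * H * (1 - H)
    gW1 = X.T @ dH; gb1 = dH.sum(0)

    W2 -= eta * gW2; b2 -= eta * gb2
    W1 -= eta * gW1; b1 -= eta * gb1

    # Relevance of each weight to an open fault: |gradient * weight|.
    rel1, rel2 = np.abs(gW1 * W1), np.abs(gW2 * W2)
    if rel1.max() >= rel2.max():
        i = np.unravel_index(rel1.argmax(), W1.shape)
        W1[i] *= shrink
    else:
        i = np.unravel_index(rel2.argmax(), W2.shape)
        W2[i] *= shrink

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

In this sketch the relevance penalty is applied every cycle; the net effect is to spread the computation across links so that zeroing any one weight perturbs the output less, which is the fault-tolerance mechanism the abstract describes.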