A Training Algorithm for Multilayer Neural Networks of Hard-Limiting Units with Random Bias

Hongbing ZHU, Kei EGUCHI, Toru TABATA

Publication
IEICE TRANSACTIONS on Fundamentals of Electronics, Communications and Computer Sciences, Vol. E83-A, No. 6, pp. 1040-1048
Publication Date: 2000/06/25
Print ISSN: 0916-8508
Type of Manuscript: Special Section PAPER (Special Section of Papers Selected from 1999 International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC'99))
Keyword: hard-limiting, multilayer neural network, back-propagation algorithm, learning of neural networks, sigmoid and threshold functions

Summary: 
The conventional back-propagation algorithm cannot be applied to networks of units with hard-limiting output functions, because these functions are not differentiable. In this paper, a gradient-descent algorithm suitable for training multilayer feedforward networks of hard-limiting units is presented. To obtain a differentiable output function for a hard-limiting unit, we exploit the fact that if the bias of a unit in such a network is a random variable with a smooth distribution function, then the probability of the unit's output being in a particular state is a continuously differentiable function of the unit's inputs. Three simulation results are given, showing that the performance of this algorithm is similar to that of conventional back-propagation.
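To make the key idea concrete, the sketch below (not from the paper) shows a single hard-limiting unit whose bias is drawn from a logistic distribution: the probability that the unit fires is then the logistic sigmoid of its net input, which is continuously differentiable and can be trained by ordinary gradient descent. The logistic choice of distribution, the toy AND task, and all names such as fire_prob are illustrative assumptions, not the paper's method.

    import numpy as np

    # A hard-limiting unit outputs step(w.x + b). If the bias b is a random
    # variable with a smooth CDF (here: Logistic(0, s)), the firing
    # probability P(b > -w.x) is a smooth function of the net input w.x,
    # so gradients exist even though step() itself is not differentiable.

    def fire_prob(w, X, s=1.0):
        """P(output = 1) for b ~ Logistic(0, s): the sigmoid of (w.x)/s."""
        return 1.0 / (1.0 + np.exp(-np.dot(X, w) / s))

    # Toy example: fit the AND function by gradient descent on the squared
    # error between the firing probability and the target.
    X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
    t = np.array([0.0, 0.0, 0.0, 1.0])  # last input column carries the mean bias

    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=3)
    lr = 2.0
    for _ in range(2000):
        p = fire_prob(w, X)                  # differentiable surrogate output
        grad = ((p - t) * p * (1 - p)) @ X   # chain rule through the sigmoid
        w -= lr * grad

    # At run time the unit is still hard-limiting: sample b, then threshold.
    b = rng.logistic(size=4)
    print((X @ w + b > 0).astype(int), "(stochastic hard-limited outputs)")
    print(np.round(fire_prob(w, X), 2), "(firing probabilities)")

In a multilayer network the same probability acts as the unit's smooth "expected output", so error signals can be back-propagated through layers of hard-limiting units exactly as through sigmoidal ones.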