

A Training Algorithm for Multilayer Neural Networks of Hard-Limiting Units with Random Bias
Hongbing ZHU, Kei EGUCHI, Toru TABATA
Publication
IEICE TRANSACTIONS on Fundamentals of Electronics, Communications and Computer Sciences
Vol.E83-A No.6 pp.1040-1048
Publication Date: 2000/06/25
Print ISSN: 0916-8508
Type of Manuscript: Special Section PAPER (Special Section of Papers Selected from 1999 International Technical Conference on Circuits/Systems, Computers and Communications (ITC-CSCC'99))
Keywords: hard-limiting, multilayer neural network, backpropagation algorithm, learning of neural networks, sigmoid and threshold functions
Summary:
The conventional backpropagation algorithm cannot be applied to networks of units having hard-limiting output functions, because these functions are not differentiable. In this paper, a gradient descent algorithm suitable for training multilayer feedforward networks of units having hard-limiting output functions is presented. To obtain a differentiable output function for a hard-limiting unit, we exploit the fact that if the bias of a unit in such a network is a random variable with a smooth distribution function, the probability of the unit's output being in a particular state is a continuously differentiable function of the unit's inputs. Three simulation results are given, which show that the performance of this algorithm is similar to that of conventional backpropagation.
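The core idea in the summary can be illustrated with a small sketch (not the paper's implementation; the logistic bias distribution and function names here are assumptions for illustration): if a hard-limiting unit's bias is drawn from a standard logistic distribution, the probability that the unit fires equals the logistic sigmoid of its net input, which is smooth and therefore usable in gradient descent.

```python
import math
import random

def hard_limit(x):
    """Hard-limiting (threshold) output: 1 if x >= 0, else 0."""
    return 1.0 if x >= 0.0 else 0.0

def sigmoid(x):
    """Logistic sigmoid: the CDF of the standard logistic distribution."""
    return 1.0 / (1.0 + math.exp(-x))

def firing_probability(net_input, n_samples=100_000, seed=0):
    """Monte Carlo estimate of P(output = 1) for a hard-limiting unit
    whose bias is sampled from a standard logistic distribution."""
    rng = random.Random(seed)
    hits = 0.0
    for _ in range(n_samples):
        u = rng.random()
        bias = math.log(u / (1.0 - u))  # inverse-CDF sample of the logistic bias
        hits += hard_limit(net_input - bias)
    return hits / n_samples

# The empirical firing probability approaches the smooth sigmoid,
# since P(net - bias >= 0) = P(bias <= net) = sigmoid(net).
net = 0.8
print(firing_probability(net), sigmoid(net))
```

Because the expected output of the stochastic hard-limiting unit is the differentiable sigmoid, gradients can be taken with respect to the expected behavior, which is what makes a backpropagation-style update possible despite the non-differentiable threshold.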

