

Small Number of Hidden Units for ELM with Two-Stage Linear Model
Hieu Trung HUYNH, Yonggwan WON
Publication
IEICE TRANSACTIONS on Information and Systems
Vol.E91-D
No.4
pp.1042-1049
Publication Date: 2008/04/01
Online ISSN: 1745-1361
DOI: 10.1093/ietisy/e91d.4.1042
Print ISSN: 0916-8532
Type of Manuscript: PAPER
Category: Data Mining
Keyword: neural networks, single hidden-layer feedforward neural networks, extreme learning machine, least-squares scheme, linear model
Summary:
Single-hidden-layer feedforward neural networks (SLFNs) are frequently used in machine learning because they can form decision boundaries with arbitrary shapes when the activation function of the hidden units is chosen properly. Most learning algorithms for these networks are based on gradient descent and remain slow because of the many learning steps required. Recently, a learning algorithm called the extreme learning machine (ELM) has been proposed for training SLFNs to overcome this problem. It randomly chooses the input weights and hidden-layer biases, and analytically determines the output weights by a matrix inverse operation. This algorithm can achieve good generalization performance with high learning speed in many applications. However, it often requires a large number of hidden units and therefore takes a long time to classify new observations. In this paper, a new approach for training SLFNs called the least-squares extreme learning machine (LS-ELM) is proposed. Unlike gradient descent-based algorithms and the ELM, our approach analytically determines the input weights, hidden-layer biases and output weights based on linear models. For training with a large number of input patterns, an online training scheme using sub-blocks of the training set is also introduced. Experimental results on real applications show that the proposed algorithm offers high classification accuracy with a smaller number of hidden units and extremely high speed in both learning and testing.
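The standard ELM procedure summarized above (random input weights and biases, output weights solved analytically by a matrix pseudo-inverse) can be sketched as follows. This is a minimal illustration of plain ELM, not of the LS-ELM variant the paper proposes; the function names and the choice of tanh activation are illustrative assumptions.

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Basic ELM training: random hidden layer, least-squares output weights.

    X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights (never trained)
    b = rng.standard_normal(n_hidden)                # random hidden-layer biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                     # output weights via Moore-Penrose pseudo-inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass through the trained ELM."""
    return np.tanh(X @ W + b) @ beta
```

Because only `beta` is solved for (a single least-squares problem), training is a one-shot computation rather than an iterative descent, which is the source of ELM's speed; the trade-off the paper targets is that many hidden units may be needed for good accuracy.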

