Principal Component Analysis by Homogeneous Neural Networks, Part II: Analysis and Extensions of the Learning Algorithms

Erkki OJA, Hidemitsu OGAWA, Jaroonsakdi WANGVIWATTANA

Publication
IEICE TRANSACTIONS on Information and Systems, Vol.E75-D, No.3, pp.376-382
Publication Date: 1992/05/25
Print ISSN: 0916-8532
Type of Manuscript: PAPER
Category: Bio-Cybernetics
Keyword: feature extraction, data compression, neural networks, learning algorithms

Summary: 
Artificial neurons and neural networks have been shown to perform Principal Component Analysis (PCA) when gradient ascent learning rules related to the constrained maximization of statistical objective functions are used. Due to their parallelism and adaptivity to input data, such algorithms and their implementations in neural networks are potentially useful in feature extraction and data compression. In the companion paper (9), two such learning rules were derived from two criteria, the Subspace Criterion and the Weighted Subspace Criterion. It was shown that the only solutions to the latter problem are the dominant eigenvectors of the data covariance matrix, which are the basis vectors of PCA, and a simulation suggested that the corresponding learning algorithm converges to these eigenvectors. A homogeneous neural network implementation was proposed for the algorithm. Here the learning algorithm is analyzed in detail, and it is shown that it can be approximated by a continuous-time differential equation obtained by averaging. The asymptotically stable limits of this differential equation are shown to be the eigenvectors. The neural network learning algorithm is further extended to the case in which each neuron has a sigmoidal nonlinear feedback activity function. No parameters specific to each neuron are then needed, and the learning rule is fully homogeneous.
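
As a concrete illustration of the algorithm under analysis, the sketch below simulates the Weighted Subspace Algorithm in the form commonly given for it in the literature, w_i <- w_i + gamma * y_i * (x - theta_i * sum_j y_j w_j) with outputs y_i = w_i^T x and distinct positive coefficients theta_1 < theta_2 < ... Averaging this stochastic update yields the differential equation dw_i/dt = C w_i - theta_i * sum_j (w_j^T C w_i) w_j, whose asymptotically stable limits are the scaled dominant eigenvectors w_i = +/- theta_i^(-1/2) c_i of the data covariance matrix C; the distinct theta_i remove the rotational indeterminacy of the plain Subspace Criterion. This is a minimal NumPy sketch under those assumptions, not the paper's experimental setup; the data model, the coefficients theta_i, and the step size gamma are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean data whose covariance C = Q diag(9, 4, 1, 0.25, 0.04) Q^T
# has distinct eigenvalues, so the dominant eigenvectors are well defined.
n, p, N = 5, 2, 20000
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal basis
stddevs = np.sqrt(np.array([9.0, 4.0, 1.0, 0.25, 0.04]))
X = (rng.standard_normal((N, n)) * stddevs) @ Q.T

W = 0.1 * rng.standard_normal((n, p))   # weight vectors w_1..w_p as columns
theta = np.array([1.0, 1.5])            # distinct coefficients, theta_1 < theta_2
gamma = 1e-3                            # small constant learning rate (assumption)

for x in X:
    y = W.T @ x                         # neuron outputs y_i = w_i^T x
    xhat = W @ y                        # common feedback term sum_j y_j w_j
    # Weighted subspace update: w_i += gamma * y_i * (x - theta_i * xhat)
    W += gamma * (np.outer(x, y) - np.outer(xhat, y * theta))

# Compare the learned directions with the top eigenvectors of the sample
# covariance; cosines near 1 indicate convergence to the PCA basis vectors.
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))      # ascending eigenvalues
for i in range(p):
    w = W[:, i] / np.linalg.norm(W[:, i])
    print(f"|cos(w_{i+1}, c_{i+1})| =", abs(w @ eigvecs[:, -1 - i]))

In this sketch the limiting weight lengths are also fixed by the averaged equation: stationarity along each eigenvector gives |w_i|^2 = 1/theta_i, so with theta = (1.0, 1.5) the learned vectors settle near norms 1 and about 0.82 while aligning with the first and second principal eigenvectors.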