A Hardware Implementation of a Neural Network Using the Parallel Propagated Targets Algorithm

Anthony V. W. SMITH  Hiroshi SAKO  

IEICE TRANSACTIONS on Information and Systems   Vol.E77-D   No.4   pp.516-527
Publication Date: 1994/04/25
Print ISSN: 0916-8532
Type of Manuscript: Special Section PAPER (Special Issue on Neurocomputing)
Category: Hardware
Keywords: neural computing, parallel computation


This paper proposes a VLSI implementation of a new neural network technique called Parallel Propagated Targets (PPT). The technique differs from existing approaches in that all layers within a given network can learn simultaneously, rather than sequentially as with the Back Propagation algorithm. The Parallel Propagated Targets algorithm uses only information local to each layer, so there is no backward flow of information within the network. This simplifies the system design, reduces the complexity of implementation, and achieves greater computational efficiency. Since all synapses can be updated simultaneously, the PPT algorithm makes it possible, for the first time, to compute all layers of a multi-layered network in parallel.
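The abstract's central idea, that each layer learns from purely local quantities with no backward error flow, can be illustrated with a small sketch. This is not the paper's PPT algorithm; it is a hypothetical two-layer example in which the hidden layer's target is derived by a fixed forward projection of the output target (an assumption made only for this sketch), so each weight update uses only values available at that layer.

```python
import numpy as np

# Illustrative sketch of layer-local learning in the spirit of PPT.
# ASSUMPTION: the hidden-layer target is obtained by projecting the
# output target forward through a fixed random matrix T; this is a
# stand-in for however PPT derives per-layer targets, not the paper's
# actual rule. The key property shown: no error signal flows backward,
# and both layers can update at the same time.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hid, n_out = 4, 6, 2
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))   # input  -> hidden weights
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))  # hidden -> output weights
T  = rng.normal(0.0, 0.5, (n_hid, n_out))  # fixed target projection (hypothetical)

x = rng.normal(0.0, 1.0, (n_in,))          # one training input
y = np.array([1.0, 0.0])                   # desired output
lr = 0.5

losses = []
for _ in range(200):
    h = sigmoid(W1 @ x)                    # forward pass
    o = sigmoid(W2 @ h)
    h_tgt = sigmoid(T @ y)                 # layer-local target, derived forward
    # Both layers update simultaneously from purely local quantities:
    W1 += lr * np.outer(h_tgt - h, x)      # hidden layer: local delta rule
    W2 += lr * np.outer(y - o, h)          # output layer: local delta rule
    losses.append(float(np.sum((y - o) ** 2)))
```

Because neither update references the other layer's error, the two weight matrices could be computed by independent hardware units in the same cycle, which is the implementation simplification the abstract emphasizes.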