Neural Network Multiprocessors Applied with Dynamically Reconfigurable Pipeline Architecture
Takayuki MORISHITA, Iwao TERAMOTO
Publication: IEICE TRANSACTIONS on Electronics, Vol.E77-C, No.12, pp.1937-1943
Publication Date: 1994/12/25
Print ISSN: 0916-8516
Type of Manuscript: Special Section PAPER (Special Issue on Multimedia, Analog and Processing LSIs)
Category: Processors
Keywords: neural network, digital processor, back-propagation, multiprocessors' configuration, computer architecture
Summary:
Processing elements (PEs) with a dynamically reconfigurable pipeline architecture allow high-speed computation of a widely used neural model: multi-layer perceptrons trained with the backpropagation (BP) learning rule. The architecture, originally proposed for a single chip, is extended here to a multiprocessor structure. Each PE holds elements of the synaptic weight matrix and of the input vector. Multiple local buses, a mechanism for swapping the weight matrix and the input vector, and transfer commands between processor elements allow the implementation of neural networks larger than the physical PE array. Peak performance, estimated from measurements of a single processor element at a clock frequency of 50 MHz, is 21.2 MCPS (million connections per second) in the evaluation phase and 8.0 MCUPS (million connection updates per second) in the learning phase. In the evaluated model, multi-layer perceptrons with 768 neurons and 131072 synapses are trained by the BP learning rule; with 64 processor elements and 32 neurons per PE, this corresponds to 1357 MCPS and 512 MCUPS.
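To make the mapping concrete, the following minimal C sketch simulates one way a fully connected layer wider than the physical PE array can be evaluated: each PE owns a row block of the weight matrix plus a slice of the input vector, and the slices are rotated from PE to PE (a stand-in for the inter-PE transfer commands mentioned in the summary) until every PE has accumulated its complete matrix-vector products. The ring layout, the sizes N_PE and BLK, and all identifiers are illustrative assumptions, not the paper's exact data paths. Note also that the quoted multiprocessor figures are consistent with linear scaling over 64 PEs: 64 x 21.2 MCPS is approximately 1357 MCPS, and 64 x 8.0 MCUPS is 512 MCUPS.

/* Sketch only: a ring of PEs evaluating a fully connected layer by
 * rotating input-vector slices. Sizes are small and illustrative. */
#include <stdio.h>

#define N_PE 8                 /* physical processor elements (illustrative)   */
#define BLK  4                 /* neurons / input elements mapped to each PE   */
#define N    (N_PE * BLK)      /* total layer width                            */

typedef struct {
    float w[BLK][N];           /* this PE's rows of the synaptic weight matrix */
    float x[BLK];              /* input-vector slice currently resident        */
    float acc[BLK];            /* partial sums for this PE's output neurons    */
} PE;

int main(void) {
    static PE pe[N_PE];
    float x_full[N], y_ref[N];

    /* Deterministic test data. */
    for (int i = 0; i < N; i++) x_full[i] = (float)(i % 5) - 2.0f;
    for (int p = 0; p < N_PE; p++)
        for (int r = 0; r < BLK; r++)
            for (int c = 0; c < N; c++)
                pe[p].w[r][c] = 0.01f * (float)((p * BLK + r + c) % 7);

    /* Distribute the input vector: PE p initially holds slice p. */
    for (int p = 0; p < N_PE; p++)
        for (int k = 0; k < BLK; k++) {
            pe[p].x[k] = x_full[p * BLK + k];
            pe[p].acc[k] = 0.0f;
        }

    /* N_PE steps: in step s, PE p holds slice (p + s) mod N_PE, accumulates
     * the matching weight columns, then passes its slice to the previous PE. */
    for (int s = 0; s < N_PE; s++) {
        for (int p = 0; p < N_PE; p++) {
            int slice = (p + s) % N_PE;
            for (int r = 0; r < BLK; r++)
                for (int k = 0; k < BLK; k++)
                    pe[p].acc[r] += pe[p].w[r][slice * BLK + k] * pe[p].x[k];
        }
        /* Rotate input slices one PE to the left (models inter-PE transfer). */
        float tmp[BLK];
        for (int k = 0; k < BLK; k++) tmp[k] = pe[0].x[k];
        for (int p = 0; p < N_PE - 1; p++)
            for (int k = 0; k < BLK; k++) pe[p].x[k] = pe[p + 1].x[k];
        for (int k = 0; k < BLK; k++) pe[N_PE - 1].x[k] = tmp[k];
    }

    /* Reference matrix-vector product to check the distributed result. */
    for (int p = 0; p < N_PE; p++)
        for (int r = 0; r < BLK; r++) {
            int i = p * BLK + r;
            y_ref[i] = 0.0f;
            for (int c = 0; c < N; c++) y_ref[i] += pe[p].w[r][c] * x_full[c];
            printf("neuron %2d: ring=%8.4f ref=%8.4f\n", i, pe[p].acc[r], y_ref[i]);
        }
    return 0;
}

The same partitioning generalizes to the BP learning phase, where weight blocks rather than input slices would also need to be swapped between PEs; how that is scheduled in hardware is the subject of the paper itself.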