Uniqueness Theorem of Complex-Valued Neural Networks with Polar-Represented Activation Function

Masaki KOBAYASHI  

Publication
IEICE TRANSACTIONS on Fundamentals of Electronics, Communications and Computer Sciences   Vol.E98-A   No.9   pp.1937-1943
Publication Date: 2015/09/01
Online ISSN: 1745-1337
DOI: 10.1587/transfun.E98.A.1937
Type of Manuscript: PAPER
Category: Nonlinear Problems
Keyword:
complex-valued neural networks, activation function, reducibility, uniqueness theorem

Summary: 
Several models of feed-forward complex-valued neural networks have been proposed; those with split activation functions and those with polar-represented activation functions have been studied most extensively. Complex-valued neural networks with split activation functions are relatively easy to analyze, whereas those with polar-represented activation functions have many applications but are difficult to analyze. In previous research, Nitta proved the uniqueness theorem for complex-valued neural networks with split activation functions. He subsequently studied their critical points, which cause plateaus and local minima in the learning process. Thus, the uniqueness theorem is closely related to learning. In the present work, we first define three types of reducibility for feed-forward complex-valued neural networks with polar-represented activation functions and prove that reducible complex-valued neural networks can easily be transformed into irreducible ones. We then prove the uniqueness theorem for complex-valued neural networks with polar-represented activation functions.
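
For readers unfamiliar with the two families of activation functions named in the summary, the following is a minimal sketch of their commonly used forms; the exact definitions adopted in the paper may differ in detail. A split activation function applies a real activation (here tanh, as an illustrative choice) to the real and imaginary parts of the net input z separately, while a polar-represented (amplitude-phase) activation function squashes the amplitude |z| and preserves the phase arg z:

f_split(z) = \tanh(\operatorname{Re} z) + i\,\tanh(\operatorname{Im} z),
f_polar(z) = \tanh(|z|)\, e^{i \arg z}.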