Neural Learning of Chaotic System Behavior

Gustavo DECO  Bernd SCHÜRMANN  

Publication
IEICE TRANSACTIONS on Fundamentals of Electronics, Communications and Computer Sciences   Vol.E77-A   No.11   pp.1840-1845
Publication Date: 1994/11/25
Print ISSN: 0916-8508
Type of Manuscript: Special Section PAPER (Special Section on Nonlinear Theory and Its Applications)
Category: Neural Network and Its Applications
Keyword: 
recurrent neural networks, chaotic dynamics, dynamical invariants

Summary: 
We introduce recurrent networks that are able to learn chaotic maps, and investigate whether the neural models also capture the dynamical invariants (correlation dimension, largest Lyapunov exponent) of chaotic time series. We show that the dynamical invariants can already be learned by feedforward neural networks, but that recurrent learning improves the dynamical modeling of the time series. We discover a novel type of overtraining which corresponds to the forgetting of the largest Lyapunov exponent during learning, and call this phenomenon dynamical overtraining. Furthermore, we introduce a penalty term that involves a dynamical invariant of the network and avoids dynamical overtraining. As examples we use the Hénon map, the logistic map, and a real-world chaotic series that corresponds to the concentration of one of the chemicals as a function of time in experiments on the Belousov–Zhabotinskii reaction in a well-stirred flow reactor.
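
The central check described in the summary, training a network on a chaotic map and testing whether the learned model reproduces the largest Lyapunov exponent, can be illustrated with a short sketch. The code below is not taken from the paper; it is a minimal illustration (NumPy only, with a hypothetical architecture and hyperparameters) that fits a small feedforward net to the logistic map x_{t+1} = 4 x_t (1 - x_t) and then estimates the largest Lyapunov exponent of the learned map by iterating it and averaging the log of a finite-difference derivative. For the logistic map at r = 4 the true exponent is ln 2 ≈ 0.693; how closely the estimate matches depends on how well the net fits, which is the kind of comparison the paper performs for its recurrent models.

    # Minimal sketch (not the authors' code): fit a feedforward net to the
    # logistic map and estimate the largest Lyapunov exponent of the learned map.
    import numpy as np

    rng = np.random.default_rng(0)

    # --- data: logistic map time series at r = 4 ---
    def logistic(x):
        return 4.0 * x * (1.0 - x)

    x = 0.3
    series = []
    for _ in range(5000):
        x = logistic(x)
        series.append(x)
    series = np.array(series)
    X, Y = series[:-1], series[1:]          # one-step prediction pairs

    # --- tiny MLP: 1 -> H -> 1, tanh hidden layer, full-batch gradient descent ---
    H = 16
    W1 = rng.normal(0, 0.5, (H, 1)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.5, (1, H)); b2 = np.zeros(1)

    def forward(inp):
        h = np.tanh(W1 @ inp[None, :] + b1[:, None])    # hidden activations, (H, N)
        return (W2 @ h + b2[:, None]).ravel(), h

    lr = 0.2
    for epoch in range(3000):
        pred, h = forward(X)
        err = pred - Y                                  # (N,)
        # gradients of mean squared error
        gW2 = (err[None, :] @ h.T) / len(X)
        gb2 = np.array([err.mean()])
        dh = (W2.T @ err[None, :]) * (1 - h**2)         # backprop through tanh
        gW1 = (dh @ X[:, None]) / len(X)
        gb1 = dh.mean(axis=1)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    # --- largest Lyapunov exponent of the learned map via finite differences ---
    def net(x_scalar):
        return forward(np.array([x_scalar]))[0][0]

    eps, x, lam = 1e-6, 0.3, 0.0
    for _ in range(200):                                # discard transient
        x = net(x)
    n_steps = 3000
    for _ in range(n_steps):
        deriv = (net(x + eps) - net(x - eps)) / (2 * eps)
        lam += np.log(abs(deriv) + 1e-12)
        x = net(x)
    print("estimated largest Lyapunov exponent:", lam / n_steps)  # target: ln 2 ~ 0.693

If the net is undertrained or overtrained, the exponent of the learned map can drift away from the true value even while the one-step prediction error stays small, which is the effect the paper names dynamical overtraining and counters with an invariant-based penalty term.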