
Using Artificial Neural Networks and Support Vector Regression to Model the Lyapunov Exponent


Presentation Transcript


  1. Using Artificial Neural Networks and Support Vector Regression to Model the Lyapunov Exponent
     Adam Maus

  2. Nonlinear Prediction
     Largest Lyapunov Exponent (LE)
     • Measure of long-term predictability
     Models used to reconstruct the LE
     • Artificial Neural Network
     • Support Vector Regression
     Hénon Map (HM)
     • The Hénon map's LE = 0.42093
     Figure: Strange Attractor of the Hénon Map
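The quoted exponent can be checked numerically. Below is a minimal sketch, assuming the standard Hénon parameters a = 1.4, b = 0.3 (which give an LE close to the 0.42093 quoted on the slide); it estimates the largest Lyapunov exponent by propagating a tangent vector with the map's Jacobian along the orbit and averaging the log of its growth. All variable names are illustrative.

```python
import numpy as np

# Estimate the largest Lyapunov exponent of the Henon map
#   x_{n+1} = 1 - a*x_n^2 + y_n,  y_{n+1} = b*x_n
# by propagating a tangent vector with the Jacobian and renormalizing it.
a, b = 1.4, 0.3
x, y = 0.1, 0.1
v = np.array([1.0, 0.0])          # tangent vector
le_sum, n_steps = 0.0, 100_000

for _ in range(1000):             # discard a transient so the orbit settles on the attractor
    x, y = 1.0 - a * x * x + y, b * x

for _ in range(n_steps):
    # Jacobian of the map evaluated at the current point (x_n, y_n)
    J = np.array([[-2.0 * a * x, 1.0],
                  [b,            0.0]])
    v = J @ v
    norm = np.linalg.norm(v)
    le_sum += np.log(norm)        # accumulate the local stretching rate
    v /= norm                     # renormalize to avoid overflow
    x, y = 1.0 - a * x * x + y, b * x

print("largest Lyapunov exponent ~", le_sum / n_steps)   # approaches ~0.42
```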

  3. Artificial Neural Network
     Single-layer feed-forward architecture
     • 8 neurons and 3 dimensions
     • Trained using next-step prediction
     • Weights updated using a variant of simulated annealing
     • 400 training points
     • 1 million training epochs
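A minimal sketch of the kind of model this slide describes: a single hidden layer of 8 tanh neurons fed with a 3-dimensional delay embedding of a Hénon time series, trained for next-step prediction. The slide's weight update is a variant of simulated annealing; the exact rule is not given in the transcript, so this sketch uses a simple random-perturbation search that keeps only improving moves and slowly shrinks the step size. All names, values, and the training length are illustrative, not the ones from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def henon_series(n, a=1.4, b=0.3):
    # Generate a scalar time series (the x-coordinate) from the Henon map.
    x, y, out = 0.1, 0.1, []
    for _ in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        out.append(x)
    return np.array(out)

series = henon_series(500)[100:]          # drop a transient, keep ~400 points
d, h = 3, 8                               # embedding dimension, hidden neurons
X = np.array([series[i:i + d] for i in range(len(series) - d)])
y = series[d:]                            # next-step prediction targets

W_in = rng.normal(size=(h, d)) * 0.5      # input-to-hidden weights
w_out = rng.normal(size=h) * 0.5          # hidden-to-output weights

def mse(W, w):
    pred = np.tanh(X @ W.T) @ w           # single-layer feed-forward output
    return np.mean((pred - y) ** 2)

best, step = mse(W_in, w_out), 0.1
for epoch in range(20_000):               # far fewer than the 1e6 epochs quoted
    W_try = W_in + rng.normal(size=W_in.shape) * step
    w_try = w_out + rng.normal(size=w_out.shape) * step
    err = mse(W_try, w_try)
    if err < best:                        # keep only improving perturbations
        W_in, w_out, best = W_try, w_try, err
    step *= 0.9999                        # slowly "cool" the perturbation size

print("training MSE:", best)
```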

  4. Support Vector Regression
     Global solution to a convex programming problem
     Uses only a subset of points
     • Points inside the ɛ-tube are dropped; only points on or outside the tube become support vectors
     User-defined parameters
     • C - controls the flatness of the output function
     • ɛ - controls the size of the tube
     • Kernel function and its parameters
     • Training dataset size
     Many toolboxes available
     • LibSVM Toolbox
     (Formula legend: s = length of the w vector, K = kernel function)
     Toolbox available at: http://tinyurl.com/sxnse
     Picture: http://tinyurl.com/6pscdr
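As a hedged illustration of the ɛ-SVR setup described here, the sketch below uses scikit-learn's SVR class (which is built on LibSVM, rather than the MATLAB toolbox linked on the slide) to fit a next-step predictor on the same kind of 3-dimensional delay embedding. The values of C, ɛ, and the RBF kernel width are illustrative, not the ones used in the talk.

```python
import numpy as np
from sklearn.svm import SVR               # scikit-learn's SVR wraps LibSVM

def henon_series(n, a=1.4, b=0.3):
    x, y, out = 0.1, 0.1, []
    for _ in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        out.append(x)
    return np.array(out)

series = henon_series(500)[100:]
d = 3                                     # 3-dimensional delay embedding
X = np.array([series[i:i + d] for i in range(len(series) - d)])
y = series[d:]

# C, epsilon, and the kernel (with its width gamma) are the user-defined
# parameters named on the slide; these particular values are only examples.
model = SVR(kernel="rbf", C=10.0, epsilon=0.001, gamma=1.0)
model.fit(X, y)

# Only points on or outside the epsilon-tube end up as support vectors,
# which is why SVR uses just a subset of the training data.
print("support vectors:", len(model.support_), "of", len(X))
print("training MSE:", np.mean((model.predict(X) - y) ** 2))
```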

  5. Dynamic and Attractor Reconstruction Results
     Artificial Neural Network
     • LE = 0.38431
     • Mean Square Error = 4.65 x 10^-5
     Support Vector Regression
     • LE = 0.41288
     • Weighted Mean Square Error = 4.69 x 10^-5
     (Error measures shown as formulas on the slide: Mean Square Error and Weighted Mean Square Error, with c = length of the time series)
     Figures: Strange Attractor of the Neural Network; Strange Attractor of the Support Vector Regression
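The two error measures were shown as formulas (images) on the original slide and did not survive the transcript; only the note that c is the length of the time series remains. A hedged reconstruction is given below: the mean square error is the standard definition, while the exact weighting used in the talk is not recoverable here, so the weights w_k are left generic (decreasing with k so that the earliest points count most, as slide 6 explains).

```latex
\[
  \mathrm{MSE}  = \frac{1}{c}\sum_{k=1}^{c}\bigl(x_k-\hat{x}_k\bigr)^{2},
  \qquad
  \mathrm{WMSE} = \frac{\sum_{k=1}^{c} w_k\bigl(x_k-\hat{x}_k\bigr)^{2}}
                       {\sum_{k=1}^{c} w_k},
  \qquad w_1 \ge w_2 \ge \dots \ge w_c ,
\]
where $c$ is the length of the time series, $\hat{x}_k$ is the model's
prediction of $x_k$, and the weights $w_k$ emphasize the earliest points.
```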

  6. Conclusions
     Hypothesis
     • Dynamic and attractor reconstruction are correlated
     Neural Networks
     • Perform proper reconstruction given adequate training
     Support Vector Regression
     • Performs proper reconstruction provided the model is weighted to reproduce the earliest points more closely, since prediction errors grow quickly in chaotic data
     Figures: Strange Attractor of the Logistic Map (LM); Strange Attractor of the delayed Hénon Map (DHM)
