AN INTERACTIVE TOOL FOR THE STOCK MARKET RESEARCH USING RECURSIVE NEURAL NETWORKS







  1. AN INTERACTIVE TOOL FOR THE STOCK MARKET RESEARCH USING RECURSIVE NEURAL NETWORKS Master Thesis Michal Trna michal.trna@gmail.com

  2. = Overview = • Introduction to RNN • Demo of the tool • Application on the chosen domain AN INTERACTIVE TOOL FOR THE STOCK MARKET RESEARCH USING RECURSIVE NEURAL NETWORKS

  3. = Introduction to NN & RNN = • Motivation for NNs • The brain contains 50–100 billion neurons • ~1,000 trillion synaptic connections • Solves complex problems • Recognizes complex forms • Forms well-founded predictions (figures: contours of the human brain; drawing of neurons from the cerebellum of a pigeon by Ramón y Cajal, 1911)

  4. = Introduction to NN & RNN = • Biological neuron (figure: dendrites, nucleus, axon, axon terminal) • Non-local connections • Plasticity, synaptic learning • Creation and atrophy of connections • Action potential: 1–100 m/s

  5. = Introduction to NN & RNN = Hebb’s law: • When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased. Donald O. Hebb, 1949 i.e.: • Cells that fire together, wire together. • Hebbian learning / Synaptic learning • Anti-Hebbian learning

  6. = Introduction to NN & RNN = • Mathematical model of a neuron (figure: inputs, synaptic weights, bias, summing junction Σ, activation function f, output xj, recipients of the output)
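The neuron model on this slide (weighted inputs into a summing junction, bias, then an activation function) can be sketched in a few lines of plain Python; the function name and the tanh activation are illustrative choices, not taken from the thesis:

```python
import math

def neuron_output(inputs, weights, bias, activation=math.tanh):
    """Mathematical model of a neuron: the summing junction adds the
    bias to the weighted inputs, then the activation function f fires."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias  # summing junction
    return activation(s)

# Two inputs with synaptic weights 0.5 and 0.25, bias 0.1
y = neuron_output([1.0, -1.0], [0.5, 0.25], bias=0.1)
```

Swapping `activation` for a step function turns the same unit into the perceptron discussed on a later slide.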

  7. = Introduction to NN & RNN = • Artificial neural networks • A neural network is a massively parallel distributed processor that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects: • 1. Knowledge is acquired by the network through a learning process. • 2. Interneuron connection strengths known as synaptic weights are used to store the knowledge.

  8. = Introduction to NN & RNN = • Artificial neural network • Properties • Adaptability • Fault tolerance • Knowledge representation, context • Non-linearity • I/O mapping

  9. = Introduction to NN & RNN = • Hebbian theory • For p patterns of length n:
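The formula on this slide did not survive extraction. The standard Hebbian prescription for storing p bipolar patterns ξ^μ of length n (the rule later reused for the Hopfield network) is, as a reconstruction of the textbook form rather than necessarily the thesis' exact normalisation:

```latex
w_{ij} \;=\; \frac{1}{n} \sum_{\mu=1}^{p} \xi_i^{\mu}\,\xi_j^{\mu},
\qquad w_{ii} = 0
```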

  10. = Introduction to NN & RNN = • Feed-forward neural networks • Recursive neural networks

  11. = Introduction to NN & RNN = • Perceptron (figure: inputs, synaptic weights, bias, summing junction Σ, activation function f, output xj)

  12. = Introduction to NN & RNN = • Perceptron • Separability, linear classifier • XOR problem (figure: linear separation of logical AND, logical OR and logical XOR)
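A minimal plain-Python sketch of the perceptron learning rule (training sets and hyper-parameters are illustrative assumptions): it learns the linearly separable AND, while, as the slide notes, no choice of weights lets a single-layer perceptron classify all four XOR cases correctly.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Single-layer perceptron with a step activation, trained by the
    classic error-correction rule: w += lr * (target - output) * x."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
```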

  13. = Introduction to NN & RNN = • Multilayer perceptron

  14. = Introduction to NN & RNN = • Multi-layer perceptron • Learning algorithm: back-propagation • generate the output • propagate the error back to produce deltas for all output and hidden layers • compute the gradient of the weights • modify the weights in the opposite direction of the gradient
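The four back-propagation steps above can be sketched in plain Python on the XOR task from the previous slide (network size, learning rate, sigmoid units and a squared-error loss are illustrative assumptions, not the thesis' setup):

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_xor(hidden=3, lr=0.5, epochs=2000, seed=0):
    """Tiny 2-hidden-1 MLP trained by back-propagation on XOR.
    Returns the squared-error loss before and after training."""
    rng = random.Random(seed)
    w_h = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]
    w_o = [rng.uniform(-1, 1) for _ in range(hidden + 1)]
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

    def forward(x):
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
        y = sigmoid(sum(w_o[i] * h[i] for i in range(hidden)) + w_o[-1])
        return h, y

    def loss():
        return sum((t - forward(x)[1]) ** 2 for x, t in data)

    first = loss()
    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)                      # 1. generate the output
            delta_o = (y - t) * y * (1 - y)        # 2. output-layer delta
            deltas_h = [delta_o * w_o[i] * h[i] * (1 - h[i])
                        for i in range(hidden)]    #    hidden-layer deltas
            for i in range(hidden):                # 3./4. gradient step,
                w_o[i] -= lr * delta_o * h[i]      #    opposite to the grad.
            w_o[-1] -= lr * delta_o
            for i in range(hidden):
                w_h[i][0] -= lr * deltas_h[i] * x[0]
                w_h[i][1] -= lr * deltas_h[i] * x[1]
                w_h[i][2] -= lr * deltas_h[i]
    return first, loss()
```

The hidden layer is what lifts the XOR set into a separable representation, resolving the single-layer limitation shown earlier.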

  15. = Introduction to NN & RNN = • Single-layer and multi-layer perceptron (figure: decision regions achievable with one, two and three layers, from linear separation to XOR-like and arbitrary sets)

  16. = Introduction to NN & RNN = • Recurrent networks (RNN) • Simple RNN: Elman/Jordan network • Fully connected: Hopfield network

  17. = Introduction to NN & RNN = • Elman network (figure: hidden layer fed back through a context layer)

  18. = Introduction to NN & RNN = • Jordan network (figure: output layer fed back through a context layer)
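The two architectures differ only in what the context layer copies. In standard notation (a sketch with generic symbols, not the thesis' exact ones), the Elman network feeds the previous hidden state back into the hidden layer, while the Jordan network feeds back the previous output:

```latex
\text{Elman:}\quad  h_t = f\!\left(W_x x_t + W_c\, h_{t-1} + b_h\right), \qquad y_t = g\!\left(W_y h_t + b_y\right)
```

```latex
\text{Jordan:}\quad h_t = f\!\left(W_x x_t + W_c\, y_{t-1} + b_h\right), \qquad y_t = g\!\left(W_y h_t + b_y\right)
```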

  19. = Introduction to NN & RNN = • Hopfield Networks • Dynamic equation
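The dynamic equation itself is missing from the transcript. For bipolar states s_i ∈ {−1, +1}, synaptic weights w_ij and thresholds θ_i, the usual Hopfield update (a reconstruction of the standard form) is:

```latex
s_i(t+1) \;=\; \operatorname{sgn}\!\Big(\sum_{j} w_{ij}\, s_j(t) \;-\; \theta_i\Big)
```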

  20. = Introduction to NN & RNN = • Synaptic potential, threshold • Mode of operation • Synchronous • Asynchronous • Deterministic • Non-deterministic • Energy • Autoassociative memory • Capacity: 0.15 N
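The ideas on this slide (synaptic potential, asynchronous deterministic updates, an energy function, autoassociative recall) fit in a compact plain-Python sketch; zero thresholds and the 1/n weight normalisation are assumptions of this sketch:

```python
def hopfield_weights(patterns):
    """Hebbian storage: w_ij = (1/n) * sum over patterns of xi_i * xi_j,
    with no self-connections (w_ii = 0)."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, sweeps=10):
    """Asynchronous deterministic mode: update neurons one at a time
    from their synaptic potential until a fixed point is reached."""
    n = len(state)
    s = list(state)
    for _ in range(sweeps):
        changed = False
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))  # synaptic potential
            new = 1 if h >= 0 else -1                  # zero threshold
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:
            break
    return s

def energy(w, s):
    """E = -1/2 * sum_ij w_ij s_i s_j; never increases under the dynamics,
    so stored patterns sit in local energy minima."""
    n = len(s)
    return -0.5 * sum(w[i][j] * s[i] * s[j]
                      for i in range(n) for j in range(n))
```

Storing a pattern and flipping one bit of it lets the network fall back into the stored minimum, illustrating the autoassociative memory; the ≈0.15 N capacity bound limits how many such minima can coexist reliably.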

  21. = Graph Approach = • Graph approach • Acquiring pattern ξ: • Hopfield network:

  22. = Graph Approach = • Coloring (figure: red and blue components)

  23. = Graph Approach = • Tetrahedral property

  24. = Graph Approach = • Tetrahedral property • Four possible configurations (figure: the four sign configurations as ±1/0 vectors)

  25. = Graph Approach = • Parameters

  26. = Graph Approach = • Energy point, projection to 2D • Energy lines • classes • Scalar energy • Control of the convergence

  27. = Graph Approach = • Relative weight of neuron • contribution of this neuron to the component I or O • Deviation • “a hash function”

  28. = Graph Approach = • Thresholds

  29. = Tool = • Time for a demo • http://msc.michaltrna.info/markers/index.html (figure: typical convergence path)

  30. Outlook, future lines • Use the deviation to discriminate parasitic states • Quantify the results • Application to automatic trading

  31. Thank you for your attention! Time for your questions
