
Neural networks for data mining



Presentation Transcript


  1. Neural networks for data mining Eric Postma MICC-IKAT Universiteit Maastricht

  2. Overview Introduction: The biology of neural networks • the biological computer • brain-inspired models • basic notions Interactive neural-network demonstrations • Perceptron • Multilayer perceptron • Kohonen’s self-organising feature map • Examples of applications

  3. A typical AI agent

  4. Two types of learning • Supervised learning • curve fitting, surface fitting, ... • Unsupervised learning • clustering, visualisation...

  5. An input-output function

  6. Fitting a surface to four points

  7. Regression

  8. Classification

  9. The history of neural networks • A powerful metaphor • Several decades of theoretical analyses led to the formalisation in terms of statistics • Bayesian framework • We discuss neural networks from the original metaphorical perspective

  10. (Artificial) neural networks The digital computer versus the neural computer

  11. The Von Neumann architecture

  12. The biological architecture

  13. Digital versus biological computers 5 distinguishing properties • speed • robustness • flexibility • adaptivity • context-sensitivity

  14. Speed: The “hundred time steps” argument The critical resource that is most obvious is time. Neurons whose basic computational speed is a few milliseconds must be made to account for complex behaviors which are carried out in a few hundred milliseconds (Posner, 1978). This means that entire complex behaviors are carried out in less than a hundred time steps. Feldman and Ballard (1982)

  15. Graceful degradation (figure: performance as a function of damage)

  16. Flexibility: the Necker cube

  17. vision = constraint satisfaction

  18. And sometimes plain search…

  19. Adaptivity: processing implies learning in biological computers, whereas processing does not imply learning in digital computers

  20. Context-sensitivity: patterns as emergent properties

  21. Robustness and context-sensitivity: coping with noise

  22. The neural computer • Is it possible to develop a model after the natural example? • Brain-inspired models: • models based on a restricted set of structural and functional properties of the (human) brain

  23. The Neural Computer (structure)

  24. Neurons, the building blocks of the brain

  25. Neural activity (figure: a neuron’s input and output activity)

  26. Synapses, the basis of learning and memory

  27. Learning: Hebb’s rule (figure: two neurons connected by a synapse)
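
A minimal sketch of Hebb’s rule: the synapse between two neurons is strengthened when both are active at the same time. The learning rate and activity patterns below are illustrative assumptions, not part of the original slide.

```python
import numpy as np

rng = np.random.default_rng(0)

eta = 0.1          # learning rate (assumed)
w = 0.0            # weight of the synapse from neuron 1 to neuron 2

for _ in range(50):
    pre = rng.integers(0, 2)     # activity of neuron 1 (0 or 1)
    post = pre                   # neuron 2 driven by neuron 1 in this sketch
    w += eta * pre * post        # Hebbian update: co-activity strengthens w

print(f"synaptic weight after correlated firing: {w:.1f}")
```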

  28. Forgetting in neural networks

  29. Towards neural networks

  30. Connectivity An example: the visual system is a feedforward hierarchy of neural modules. Every module is (to a certain extent) responsible for a certain function.

  31. (Artificial) Neural Networks • Neurons • activity • nonlinear input-output function • Connections • weight • Learning • supervised • unsupervised

  32. Artificial Neurons • input (vectors) • summation (excitation) • output (activation)

  33. Input-output function • nonlinear function: f(x) = 1 / (1 + e^(−x/a)) • as a → 0, f approaches a step function (figure: f(e) plotted against e for several values of a)

  34. Artificial Connections (Synapses) • w_AB: the weight of the connection from neuron A to neuron B
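
A minimal sketch of the artificial neuron of slides 32–34: weighted summation of the inputs (excitation) followed by the nonlinear input-output function (activation). The input and weight values are illustrative.

```python
import numpy as np

def f(x, a=1.0):
    """Sigmoid f(x) = 1 / (1 + e^(-x/a)); smaller a gives a more step-like function."""
    return 1.0 / (1.0 + np.exp(-x / a))

def neuron(inputs, weights, a=1.0):
    excitation = np.dot(weights, inputs)   # summation of weighted inputs
    return f(excitation, a)                # activation (output)

x = np.array([0.5, -0.2, 1.0])    # input vector (illustrative)
w = np.array([0.8, 0.1, -0.4])    # connection weights w_AB (illustrative)
print(neuron(x, w))
```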

  35. The Perceptron

  36. Learning in the Perceptron • Delta learning rule • the difference between the desired output t and the actual output o, given input x • Global error E • is a function of the differences between the desired and actual outputs

  37. Gradient Descent
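
A minimal sketch of the update from slide 36, Δw = η (t − o) x, which for continuous outputs is exactly gradient descent on the global error E = ½ Σ (t − o)². The dataset (logical AND), learning rate, and epoch count are illustrative assumptions; a linear-threshold unit stands in for the perceptron.

```python
import numpy as np

# Learn logical AND with a linear-threshold perceptron.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0, 0, 0, 1], dtype=float)       # desired outputs t

eta = 0.1                                     # learning rate (assumed)
w = np.zeros(2)
b = 0.0

for epoch in range(20):                       # AND is linearly separable,
    for x, t in zip(X, T):                    # so this converges quickly
        o = float(np.dot(w, x) + b > 0)       # actual output o
        w += eta * (t - o) * x                # delta rule: eta * (t - o) * x
        b += eta * (t - o)

print("weights:", w, "bias:", b)
print("outputs:", [float(np.dot(w, x) + b > 0) for x in X])
```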

  38. Linear decision boundaries

  39. Minsky and Papert’s connectedness argument

  40. The history of the Perceptron • Rosenblatt (1959) • Minsky & Papert (1961) • Rumelhart & McClelland (1986)

  41. The multilayer perceptron (figure: input layer, one or more hidden layers, output layer)

  42. Training the MLP • supervised learning • each training pattern: input + desired output • in each epoch: present all patterns • at each presentation: adapt weights • after many epochs convergence to a local minimum
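
A minimal sketch of this training loop, assuming one hidden layer, sigmoid activations, and gradient descent on the squared error via backpropagation. The task (XOR, which a single perceptron cannot separate), architecture, learning rate, and epoch count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
T = np.array([[0], [1], [1], [0]], dtype=float)               # desired outputs

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output
eta = 0.5                                         # learning rate (assumed)

for epoch in range(5000):                 # each epoch presents all patterns
    h = sigmoid(X @ W1 + b1)              # hidden activations
    o = sigmoid(h @ W2 + b2)              # actual outputs
    d_o = (o - T) * o * (1 - o)           # output delta (squared error)
    d_h = (d_o @ W2.T) * h * (1 - h)      # hidden delta (backpropagation)
    W2 -= eta * h.T @ d_o; b2 -= eta * d_o.sum(0)   # adapt weights
    W1 -= eta * X.T @ d_h; b1 -= eta * d_h.sum(0)

print(np.round(o.ravel(), 2))   # should approach [0, 1, 1, 0]
```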

  43. Phoneme recognition with an MLP (figure: input = frequencies, output = pronunciation)

  44. Non-linear decision boundaries

  45. Compression with an MLP: the autoencoder

  46. hidden representation
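
A minimal sketch of an autoencoder, assuming the classic 8-3-8 setup: an MLP trained to reproduce its one-hot input through a three-unit hidden layer, whose activations form the compressed hidden representation. All sizes, data, and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

X = np.eye(8)                         # 8 one-hot patterns; target = input

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

W1 = rng.normal(0, 0.5, (8, 3)); b1 = np.zeros(3)   # encoder: 8 -> 3
W2 = rng.normal(0, 0.5, (3, 8)); b2 = np.zeros(8)   # decoder: 3 -> 8

eta = 1.0                             # learning rate (assumed)
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)          # hidden (compressed) representation
    o = sigmoid(h @ W2 + b2)          # reconstruction of the input
    d_o = (o - X) * o * (1 - o)       # error against the input itself
    d_h = (d_o @ W2.T) * h * (1 - h)
    W2 -= eta * h.T @ d_o; b2 -= eta * d_o.sum(0)
    W1 -= eta * X.T @ d_h; b1 -= eta * d_h.sum(0)

print(np.round(h, 2))                 # near-binary code, one row per pattern
```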

  47. Restricted Boltzmann machines (RBMs)
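
For reference, a minimal sketch of one contrastive-divergence (CD-1) update, the standard learning step for a binary RBM; layer sizes, data, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_v, n_h = 6, 3                       # visible and hidden layer sizes (assumed)
W = rng.normal(0, 0.1, (n_v, n_h))
a = np.zeros(n_v)                     # visible biases
b = np.zeros(n_h)                     # hidden biases
eta = 0.1                             # learning rate (assumed)

v0 = rng.integers(0, 2, n_v).astype(float)    # a binary "data" vector

# positive phase: sample hidden units given the data
p_h0 = sigmoid(v0 @ W + b)
h0 = (rng.random(n_h) < p_h0).astype(float)

# negative phase: reconstruct visibles, then recompute hidden probabilities
p_v1 = sigmoid(h0 @ W.T + a)
v1 = (rng.random(n_v) < p_v1).astype(float)
p_h1 = sigmoid(v1 @ W + b)

# CD-1 update: data-driven statistics minus reconstruction statistics
W += eta * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
a += eta * (v0 - v1)
b += eta * (p_h0 - p_h1)
```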

  48. Learning in the MLP
