
November 2008_Neural_Computing_Systems.SuperUnsuperppt

Neural Networks 'learn' by adapting in accordance with a training regimen: five key algorithms.

Presentation Transcript


  1. Financial Informatics – XVII: Unsupervised Learning. Khurshid Ahmad, Professor of Computer Science, Department of Computer Science, Trinity College, Dublin-2, IRELAND. November 19th, 2008. https://www.cs.tcd.ie/Khurshid.Ahmad/Teaching.html

  2. Preamble Neural Networks 'learn' by adapting in accordance with a training regimen. Five key algorithms: ERROR-CORRECTION OR PERFORMANCE LEARNING; HEBBIAN OR COINCIDENCE LEARNING; BOLTZMANN LEARNING (STOCHASTIC NET LEARNING); COMPETITIVE LEARNING; FILTER LEARNING (GROSSBERG'S NETS).

  3. Preamble Neural Networks 'learn' by adapting in accordance with a training regimen: five key algorithms. California sought to have the licence of one of the largest auditing firms (Ernst & Young) removed because of its role in the well-publicized collapse of the Lincoln Savings & Loan Association. Further, regulators could use a bankruptcy

  4. ANN Learning Algorithms

  5. ANN Learning Algorithms

  6. ANN Learning Algorithms

  7. Hebbian Learning DONALD HEBB, a Canadian psychologist, was interested in investigating PLAUSIBLE MECHANISMS FOR LEARNING AT THE CELLULAR LEVEL IN THE BRAIN. (See, for example, Donald Hebb (1949). The Organisation of Behaviour. New York: Wiley.)

  8. Hebbian Learning HEBB'S POSTULATE: When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased.

  9. Hebbian Learning Hebbian learning laws CAUSE WEIGHT CHANGES IN RESPONSE TO EVENTS WITHIN A PROCESSING ELEMENT THAT HAPPEN SIMULTANEOUSLY. THE LEARNING LAWS IN THIS CATEGORY ARE CHARACTERIZED BY THEIR COMPLETELY LOCAL CHARACTER, BOTH IN SPACE AND IN TIME.

  10. Hebbian Learning LINEAR ASSOCIATOR: A substrate for Hebbian Learning Systems. [Figure: input x = (x1, x2, x3) connected through the weights w11 … w33 to the outputs y'1, y'2, y'3, shown alongside y1, y2, y3.]
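
As a minimal sketch of the recall step only (Python with NumPy; the sizes and values below are illustrative, not taken from the slide), the associator computes its output as a matrix-vector product:

```python
import numpy as np

# Recall step of the linear associator: each output y'_i is a weighted sum of the inputs x_j.
W = np.zeros((3, 3))            # the 3x3 weight matrix [wij]; all zeros before training
x = np.array([1.0, -1.0, 0.5])  # an illustrative input pattern (x1, x2, x3)

y_out = W @ x                   # y' = W x (all zeros here, since W has not yet been trained)
print(y_out)
```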

  11. Hebbian Learning A simple form of the Hebbian learning rule is ∆wij = η yi xj, where η is the so-called rate of learning and x and y are the input and output respectively. This rule is also called the activity product rule.
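
A sketch of one application of the activity product rule for a single processing element (Python with NumPy), assuming the output is computed as y = f(net) with net = w·x, as in the worked example that follows; the activation function f is supplied by the caller:

```python
import numpy as np

def hebb_step(w, x, f, eta=1.0):
    """One application of the activity product rule: delta_w = eta * y * x, with y = f(w . x)."""
    y = f(np.dot(w, x))          # postsynaptic output for the current input
    return w + eta * y * x       # Hebbian increment added to the weight vector
```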

  12. Hebbian Learning A simple form of the Hebbian learning rule: if there are "m" pairs of vectors to be stored in a network, then the training sequence will change the weight matrix, w, from its initial value of ZERO to its final state by simply adding together all of the incremental weight changes caused by the "m" applications of Hebb's law: w = ∆w(1) + ∆w(2) + … + ∆w(m) = η Σi y(i) x(i)ᵀ, the sum of the outer products of the m output and input pairs.
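
Expressed for the linear associator, the same accumulation can be sketched as a sum of outer products starting from zero weights (Python with NumPy); the pairs below are placeholders only, since the slide's own vectors appear in its figures:

```python
import numpy as np

def hebbian_store(pairs, eta=1.0):
    """Store m (input, output) pairs: W = sum over i of eta * y_i x_i^T, starting from W = 0."""
    x0, y0 = pairs[0]
    W = np.zeros((len(y0), len(x0)))   # initial weight matrix is ZERO, as on the slide
    for x, y in pairs:
        W += eta * np.outer(y, x)      # incremental change from one application of Hebb's law
    return W

# Placeholder pairs for illustration only.
pairs = [(np.array([1.0, -1.0, 0.0]), np.array([0.0, 1.0, -1.0])),
         (np.array([0.5, 0.5, 1.0]),  np.array([1.0, 0.0, 0.0]))]
W = hebbian_store(pairs)
```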

  13. Hebbian Learning A worked example: Consider the Hebbian learning of three input vectors: in a network with the following initial weight vector:

  14. Hebbian Learning A worked example: Consider the Hebbian learning of three input vectors: in a network with the following initial weight vector:

  15. Hebbian Learning A worked example: Consider the Hebbian learning of three input vectors: in a network with the following initial weight vector:

  16. Hebbian Learning A worked example: Consider the Hebbian learning of three input vectors: in a network with the following initial weight vector:

  17. Hebbian Learning The worked example shows that with discrete f(net) and η = 1, the weight change involves ADDING or SUBTRACTING the entire input pattern vectors to and from the weight vectors respectively. Consider the case when the activation function is a continuous one. For example, take the bipolar continuous activation function f(net) = 2/(1 + exp(-λ·net)) - 1.
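
A sketch of the discrete bipolar case (Python with NumPy): with f(net) = ±1 and η = 1, each step adds or subtracts the whole input pattern. The three input vectors and the initial weight vector below are made up for illustration; the slide's actual numbers are in its figures.

```python
import numpy as np

def f_discrete_bipolar(net):
    """Discrete bipolar activation: +1 if net >= 0, else -1."""
    return 1.0 if net >= 0 else -1.0

# Hypothetical stand-ins for the slide's three input vectors and initial weights w(0).
w = np.array([1.0, -1.0, 0.0, 0.5])
inputs = [np.array([1.0, -2.0, 1.5, 0.0]),
          np.array([1.0, -0.5, -2.0, -1.5]),
          np.array([0.0, 1.0, -1.0, 1.5])]

for x in inputs:
    y = f_discrete_bipolar(np.dot(w, x))
    w = w + y * x                 # eta = 1: the whole pattern is added or subtracted
    print(w)
```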

  18. Hebbian Learning The worked example with the bipolar continuous activation function shows that the weight adjustments are tapered for the continuous function but are generally in the same direction:

  19. Hebbian Learning The details of the computation for the three steps with a discrete bipolar activation function are presented below in the notes pages. The input vectors and the initial weight vector are:

  20. Hebbian Learning The details of the computation for the three steps with a continuous bipolar activation function are presented below in the notes pages. The input vectors and the initial weight vector are:
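
For comparison, a sketch of the same three steps with the bipolar continuous activation f(net) = 2/(1 + exp(-λ·net)) - 1 (λ = 1 assumed, same made-up vectors as in the discrete sketch above); because |y| < 1, the weight adjustments are tapered but point in the same direction:

```python
import numpy as np

def f_continuous_bipolar(net, lam=1.0):
    """Bipolar continuous activation: 2 / (1 + exp(-lam*net)) - 1, with values in (-1, 1)."""
    return 2.0 / (1.0 + np.exp(-lam * net)) - 1.0

# Same hypothetical w(0) and inputs as in the discrete sketch above.
w = np.array([1.0, -1.0, 0.0, 0.5])
inputs = [np.array([1.0, -2.0, 1.5, 0.0]),
          np.array([1.0, -0.5, -2.0, -1.5]),
          np.array([0.0, 1.0, -1.0, 1.5])]

for x in inputs:
    y = f_continuous_bipolar(np.dot(w, x))
    w = w + y * x                 # |y| < 1, so each adjustment is tapered
    print(w)
```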

  21. Hebbian Learning Recall that the simple form of the Hebbian learning law suggests that the repeated application of the presynaptic signal xj leads to an increase in yk, and therefore to exponential growth that finally drives the synaptic connection into saturation. A number of researchers have proposed ways in which such saturation can be avoided. Sejnowski has suggested that the covariance between the presynaptic and postsynaptic signals be used in place of their product (the covariance hypothesis discussed below).

  22. Hebbian Learning The Hebbian synapse described below is said to involve the use of POSITIVE FEEDBACK.

  23. Hebbian Learning What is the principal limitation of this simplest form of learning? The above equation suggests that the repeated application of the input signal leads to an increase in yk, and therefore to exponential growth that finally drives the synaptic connection into saturation. At that point of saturation no new information can be stored in the synapse and selectivity is lost. Graphically, the relationship of ∆wkj with the postsynaptic activity yk is a simple one: it is linear, with a slope η xj.

  24. Hebbian Learning The so-called covariance hypothesis was introduced to deal with the principal limitation of the simplest form of Hebbian learning and is given as ∆wkj = η (xj − x̄)(yk − ȳ), where x̄ and ȳ denote the time-averaged values of the presynaptic and postsynaptic signals.
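
A sketch of the covariance rule as an update step (Python with NumPy), assuming running estimates of the time-averaged activities are available; all names and values are illustrative:

```python
import numpy as np

def covariance_update(W, x, y, x_bar, y_bar, eta=0.1):
    """Covariance rule: delta_w_kj = eta * (y_k - y_bar) * (x_j - x_bar),
    where x_bar and y_bar hold element-wise time-averaged activities."""
    return W + eta * np.outer(y - y_bar, x - x_bar)
```

When the presynaptic and postsynaptic activities sit at their time-averaged values, the update is zero, so the weights no longer grow without bound as they do under the simple product rule.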

  25. Hebbian Learning If we expand the above equation: ∆wkj = η xj yk − η xj ȳ − η x̄ yk + η x̄ ȳ. The last term in the above equation is a constant and the first term is what we have for the simplest Hebbian learning rule: η xj yk.

  26. Hebbian Learning Graphically, the relationship of ∆wkj with the postsynaptic activity yk is still linear, but with a slope η (xj − x̄) and the assurance that the straight line changes sign at yk = ȳ; the minimum value of the weight change ∆wkj, reached at yk = 0, is −η ȳ (xj − x̄).
