
Presentation Transcript


  1. CHAPTER 4 Perceptron Learning Rule

  2. Objectives • How do we determine the weight matrix and bias for perceptron networks with many inputs, where it is impossible to visualize the decision boundaries? • The main objective is to describe an algorithm for training perceptron networks, so that they can learn to solve classification problems.

  3. History : 1943 • Warren McCulloch and Walter Pitts introduced one of the first artificial neurons in 1943. • The main feature of their neuron model is that a weighted sum of input signals is compared to a threshold to determine the neuron output. • They went on to show that networks of these neurons could, in principle, compute any arithmetic or logic function. • Unlike biological networks, the parameters of their networks had to be designed by hand, as no training method was available.

  4. History : 1950s • Frank Rosenblatt and several other researchers developed a class of neural networks called perceptrons in the late 1950s. • Rosenblatt’s key contribution was the introduction of a learning rule for training perceptron networks to solve pattern recognition problems. • The perceptron could even learn when initialized with random values for its weights and biases.

  5. History : ~1980s • Marvin Minsky and Seymour Papert (1969) demonstrated that perceptron networks were incapable of implementing certain elementary functions (e.g., the XOR gate). • It was not until the 1980s that these limitations were overcome with improved (multilayer) perceptron networks and associated learning rules. • The perceptron network remains a fast and reliable network for the class of problems that it can solve.

  6. Learning Rule • Learning rule: a procedure (training algorithm) for modifying the weights and the biases of a network. • The purpose of the learning rule is to train the network to perform some task. • Learning rules fall into three broad categories: supervised learning, unsupervised learning, and reinforcement (graded) learning.

  7. Supervised Learning • The learning rule is provided with a set of examples (the training set) of proper network behavior: {p1, t1}, {p2, t2}, …, {pQ, tQ}, where pq is an input to the network and tq is the corresponding correct (target) output. • As the inputs are applied to the network, the network outputs are compared to the targets. The learning rule is then used to adjust the weights and biases of the network in order to move the network outputs closer to the targets.

  8. Supervised Learning

  9. Reinforcement Learning • The learning rule is similar to supervised learning, except that, instead of being provided with the correct output for each network input, the algorithm is only given a grade. • The grade (score) is a measure of the network performance over some sequence of inputs. • It appears to be most suited to control system applications.

  10. Unsupervised Learning • The weights and biases are modified in response to network inputs only; there are no target outputs available. • Most of these algorithms perform some kind of clustering operation. They learn to categorize the input patterns into a finite number of classes. This is especially useful in applications such as vector quantization.

  11. Two-Input / Single-Neuron Perceptron • The output is a = hardlim(n) = hardlim(Wp + b). • The decision boundary is the set of inputs for which n = Wp + b = 0. • The boundary is always orthogonal to the weight vector W, and W points toward the region where n > 0 (where the output is 1).
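The slide's figure did not survive extraction, so here is a minimal runnable sketch of a two-input, single-neuron perceptron in NumPy; the particular weight and bias values are illustrative assumptions, not taken from the slide:

```python
import numpy as np

def hardlim(n):
    """Hard-limit transfer function: 1 where n >= 0, else 0."""
    return (n >= 0).astype(int)

# Illustrative parameters (assumptions, not read off the slide):
W = np.array([[1.0, 2.0]])   # 1x2 weight matrix
b = np.array([-2.0])         # bias

def perceptron(p):
    return hardlim(W @ p + b)

# The boundary 1*p1 + 2*p2 - 2 = 0 is orthogonal to W = [1, 2];
# W points toward the side where the output is 1.
print(perceptron(np.array([2.0, 2.0])))  # [1]
print(perceptron(np.array([0.0, 0.0])))  # [0]
```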

  12. Perceptron Network Design • The input/target pairs for the AND gate are {(0,0), 0}, {(0,1), 0}, {(1,0), 0}, {(1,1), 1} (dark circle: 1, light circle: 0). • Step 1: Select a decision boundary that separates the two classes. • Step 2: Choose a weight vector W that is orthogonal to the decision boundary. • Step 3: Find the bias b, e.g., by picking a point on the decision boundary and requiring that it satisfy n = Wp + b = 0.
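A sketch of the three design steps applied to the AND gate. W = [2, 2] and b = -3 are one valid hand-picked choice, an assumption consistent with the steps above rather than values read off the slide's figure:

```python
import numpy as np

hardlim = lambda n: int(n >= 0)

# Step 2: W = [2, 2] is orthogonal to a boundary separating (1,1)
# from the other three points.
# Step 3: the boundary point (0.75, 0.75) gives n = 2*0.75 + 2*0.75 + b = 0,
# so b = -3.  (These particular values are an illustrative assumption.)
W = np.array([2.0, 2.0])
b = -3.0

for p, t in [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]:
    a = hardlim(W @ np.array(p, dtype=float) + b)
    print(p, "->", a, "target:", t)
```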

  13. Test Problem • The given input/target pairs are {p1, t1 = 1}, {p2, t2 = 0}, {p3, t3 = 0} (dark circle: 1, light circle: 0). • The network has two inputs and one output, and no bias, so the decision boundary must pass through the origin. • The length of the weight vector does not matter; only its direction is important.

  14. Constructing Learning Rule • Training begins by assigning some initial values for the network parameters. • When p1 is presented, the network does not return the correct value: a = 0 while t1 = 1. • The initial weight vector results in a decision boundary that incorrectly classifies the vector p1.

  15. Constructing Learning Rule • One approach would be to set W equal to p1, so that p1 would be classified properly in the future. • Unfortunately, it is easy to construct a problem for which this rule cannot find a solution.

  16. Constructing Learning Rule • Another approach would be to add p1 to W. Adding p1 to W would make W point more in the direction of p1. • This gives the first rule: if t = 1 and a = 0, then W_new = W_old + p.

  17. Constructing Learning Rule • The next input vector is p2. • A class 0 vector was misclassified as a 1: a = 1 while t2 = 0. • By symmetry, the second rule: if t = 0 and a = 1, then W_new = W_old − p.

  18. Constructing Learning Rule • Present the third vector, p3. • Again a class 0 vector was misclassified as a 1 (a = 1 while t3 = 0), so the second rule is applied once more.

  19. Constructing Learning Rule • The third and final rule: if t = a, then W_new = W_old (the weights are left unchanged). • Training sequence: p1 p2 p3 p1 p2 p3 … (one complete pass through all the training vectors is one iteration). • If we present any of the input vectors to the neuron, it will output the correct class for that input vector: the perceptron has finally learned to classify the three vectors properly.
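A runnable sketch of this hand-constructed rule on the test problem. The data points were lost from the slides in extraction, so the vectors and initial weights below (p1 = [1, 2] with t1 = 1, p2 = [-1, 2] with t2 = 0, p3 = [0, -1] with t3 = 0, initial W = [1.0, -0.8]) are assumptions taken from the textbook version of this example:

```python
import numpy as np

hardlim = lambda n: int(n >= 0)

# Assumed test-problem data (the slide figures were lost in extraction):
P = [np.array([1.0, 2.0]), np.array([-1.0, 2.0]), np.array([0.0, -1.0])]
T = [1, 0, 0]

W = np.array([1.0, -0.8])   # assumed initial weight vector (no bias)

converged = False
while not converged:
    converged = True
    for p, t in zip(P, T):
        a = hardlim(W @ p)
        if t == 1 and a == 0:      # rule 1: add p to W
            W = W + p
            converged = False
        elif t == 0 and a == 1:    # rule 2: subtract p from W
            W = W - p
            converged = False
        # rule 3: if t == a, W is left unchanged

print("final W:", W)  # classifies all three vectors correctly
```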

  20. Unified Learning Rule • Perceptron error: e = t − a. • The three rules can then be written as a single expression: W_new = W_old + e·p^T. • The bias is updated as if it were a weight whose input is always 1: b_new = b_old + e.
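A short sketch of the unified rule as a single update function; the starting values are the same assumed ones as in the previous sketch, with a bias added to show the b update:

```python
import numpy as np

hardlim = lambda n: int(n >= 0)

def unified_update(W, b, p, t):
    """One application of the unified perceptron rule."""
    a = hardlim(W @ p + b)
    e = t - a                  # e is +1, -1, or 0: the three rules in one
    return W + e * p, b + e    # bias updated as a weight with input 1

# Same assumed starting point as above, now with a bias term:
W, b = np.array([1.0, -0.8]), 0.0
W, b = unified_update(W, b, np.array([1.0, 2.0]), t=1)
print(W, b)   # [2.  1.2] 1.0 -- the same move as rule 1, plus a bias step
```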

  21. Training Multiple-Neuron Perceptron • For a layer of several perceptrons, the unified rule is applied row by row, or in matrix form: W_new = W_old + e·p^T and b_new = b_old + e, where the error e = t − a is now a vector. • A learning rate α (0 < α ≤ 1) may be used to scale the update: W_new = W_old + α·e·p^T, b_new = b_old + α·e.
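A matrix-form sketch of the multiple-neuron update; the layer sizes and starting values here are illustrative assumptions:

```python
import numpy as np

def hardlim(n):
    return (n >= 0).astype(float)

def train_step(W, b, p, t, alpha=1.0):
    """Unified perceptron rule for a whole layer (matrix form)."""
    a = hardlim(W @ p + b)
    e = t - a                          # error vector, one entry per neuron
    W = W + alpha * np.outer(e, p)     # e * p^T, a rank-one update
    b = b + alpha * e
    return W, b

# Shapes for a 2-neuron, 2-input layer (values are illustrative):
W, b = np.zeros((2, 2)), np.zeros(2)
W, b = train_step(W, b, p=np.array([1.0, 2.0]), t=np.array([1.0, 0.0]))
print(W)
print(b)
```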

  22. Apple/Orange Recognition Problem
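The content of this slide was lost in extraction. As a hedged sketch: the textbook's running apple/orange example describes each fruit by a three-element vector p = [shape; texture; weight], and the prototype vectors below (orange = [1, -1, -1] with target 0, apple = [1, 1, -1] with target 1) are assumptions based on that example. The unified rule separates the two prototypes in a couple of passes:

```python
import numpy as np

hardlim = lambda n: int(n >= 0)

# Assumed prototype vectors (from the textbook's running example):
P = [np.array([1.0, -1.0, -1.0]),   # orange, target 0
     np.array([1.0,  1.0, -1.0])]   # apple,  target 1
T = [0, 1]

W, b = np.zeros(3), 0.0
for _ in range(10):                  # a few passes are plenty here
    for p, t in zip(P, T):
        e = t - hardlim(W @ p + b)
        W, b = W + e * p, b + e

for p, t in zip(P, T):
    print(hardlim(W @ p + b), "target:", t)
```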

  23. Limitations • The perceptron can be used to classify input vectors that can be separated by a linear boundary, as in the AND gate example; such problems are called linearly separable (the AND, OR, and NOT gates all are). • Problems that are not linearly separable, e.g., the XOR gate, cannot be solved by a single-layer perceptron.
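One way to see the limitation concretely is to run the perceptron rule on the XOR pairs: because no line separates the two classes, the error never settles to zero. A small sketch (the 100-epoch cutoff is arbitrary):

```python
import numpy as np

hardlim = lambda n: int(n >= 0)

# XOR input/target pairs: not linearly separable
P = [np.array(v, dtype=float) for v in [(0, 0), (0, 1), (1, 0), (1, 1)]]
T = [0, 1, 1, 0]

W, b = np.zeros(2), 0.0
for epoch in range(100):             # arbitrary cutoff; it never converges
    errors = 0
    for p, t in zip(P, T):
        e = t - hardlim(W @ p + b)
        W, b = W + e * p, b + e
        errors += abs(e)
    if errors == 0:
        break
print("errors in last epoch:", errors)   # stays > 0 for XOR
```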

  24. Solved Problem P4.3 • Design a perceptron network to solve the following problem: classify eight two-element input vectors, two per class, into four classes. [The coordinate labels in the slide's figure were garbled in extraction; the minus signs were lost.] • Targets: Class 1: t = (0,0); Class 2: t = (0,1); Class 3: t = (1,0); Class 4: t = (1,1). • A two-neuron perceptron creates two decision boundaries.

  25. Solution of P4.3 • [Network diagram: a two-input, two-neuron perceptron whose weights and biases realize the two decision boundaries; the numeric labels and the Class 1–4 region plot were garbled in extraction.]

  26. Solved Problem P4.5 • Train a perceptron network to solve the problem of P4.3 using the perceptron learning rule.
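A runnable sketch of this training run. The input coordinates on slide 24 lost their minus signs in extraction, so the data below follows the textbook version of P4.3 and should be treated as an assumption:

```python
import numpy as np

def hardlim(n):
    return (n >= 0).astype(float)

# Assumed P4.3 data (signs restored from the textbook version):
P = [(1, 1), (1, 2),      # class 1, t = (0, 0)
     (2, -1), (2, 0),     # class 2, t = (0, 1)
     (-1, 2), (-2, 1),    # class 3, t = (1, 0)
     (-1, -1), (-2, -2)]  # class 4, t = (1, 1)
T = [(0, 0), (0, 0), (0, 1), (0, 1), (1, 0), (1, 0), (1, 1), (1, 1)]

W, b = np.zeros((2, 2)), np.zeros(2)
for _ in range(50):                       # enough passes to converge here
    for p, t in zip(P, T):
        p, t = np.array(p, float), np.array(t, float)
        e = t - hardlim(W @ p + b)
        W, b = W + np.outer(e, p), b + e

ok = all(
    np.array_equal(hardlim(W @ np.array(p, float) + b), np.array(t, float))
    for p, t in zip(P, T)
)
print("W =", W, " b =", b, " all correct:", ok)
```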

  27. Solution of P4.5 • [The iteration-by-iteration weight updates shown on this slide were lost in extraction.]

  28. Solution of P4.5 • [Continuation of the training iterations; lost in extraction.]

  29. Solution of P4.5 • [Final network diagram with the trained weight and bias values; the numeric labels were garbled in extraction.]
