
Pattern Associators, Generalization, Processing


Presentation Transcript


  1. Pattern Associators, Generalization, Processing. Psych 85-419/719, Feb 6, 2001

  2. A Pattern Associator • Consists of a set of input units, a set of output units, and connections from the input units to the output units • … and a training set of examples, consisting of inputs and their corresponding outputs

  3. Simple Generalization in Hebbian Learning

  4. The Dot Product • The sum of the products of the elements of two vectors • When normalized for the vectors' lengths, it is essentially the correlation between the vectors • The angle between the vectors is the inverse cosine of the normalized dot product • When the dot product is 0 (or, the angle is 90 degrees), the vectors are orthogonal
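A minimal sketch (using NumPy; the vectors are illustrative, not from the course materials) of the quantities just described: the dot product, the length-normalized dot product (the cosine of the angle), and the orthogonality check.

```python
import numpy as np

a = np.array([1.0, 1.0])
b = np.array([1.0, -1.0])

dot = float(np.dot(a, b))                # sum of the element-wise products
# Dividing by the two vector lengths normalizes for length, giving the
# cosine of the angle between the vectors (a correlation-like similarity).
cosine = dot / (np.linalg.norm(a) * np.linalg.norm(b))
angle = np.degrees(np.arccos(cosine))

print(dot, cosine, angle)                # 0.0 0.0 90.0 -> orthogonal vectors
```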

  5. Geometrically... [figure: the vectors (1,1) and (1,-1) plotted in the plane]

  6. So, Generalization in Hebb... • After single learning trial, generalization is proportional to: • Output from trained trial, and • Correlation between new test input and learned input

  7. After Multiple Training Trials.. • The output for a test pattern is a function of the sum of the dot products between the test input and each of the training input patterns, each multiplied by the output that the corresponding training pattern produced.

  8. Properties of Hebb Generalization • If the test input is uncorrelated with all training inputs, the output is zero • Otherwise, the output is a weighted average of the outputs from all training trials • Weighted by the correlations with the inputs on those training trials • If all training inputs are orthogonal to each other, there is no cross-talk
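A minimal sketch (NumPy; the patterns are made up for illustration) of a linear Hebbian pattern associator showing these properties: orthogonal training inputs produce no cross-talk, and a correlated test input retrieves a copy of the trained output scaled by the dot product.

```python
import numpy as np

def hebb_train(inputs, targets, lrate=0.25):
    """Hebbian learning: accumulate dW = lrate * target (outer) input."""
    W = np.zeros((targets.shape[1], inputs.shape[1]))
    for x, t in zip(inputs, targets):
        W += lrate * np.outer(t, x)
    return W

# Two mutually orthogonal training inputs with different target outputs.
inputs  = np.array([[1.,  1., 1.,  1.],
                    [1., -1., 1., -1.]])
targets = np.array([[ 1., -1.],
                    [-1.,  1.]])

W = hebb_train(inputs, targets)

# Orthogonal training inputs reproduce their own targets with no cross-talk.
print(W @ inputs[0])       # [ 1. -1.]
print(W @ inputs[1])       # [-1.  1.]

# A test input correlated only with the first training input produces a
# scaled copy of the first target, in proportion to their dot product.
test = np.array([1., 1., 0., 0.])   # dot with inputs[0] is 2, with inputs[1] is 0
print(W @ test)                     # [ 0.5 -0.5]
```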

  9. Cross-Talk in Delta Rule Learning • Suppose we learn a given pattern with the delta rule • What happens when we present a test pattern that is similar to that learned pattern? • The change in output for the test pattern is a function of the error on the learned pattern and the dot product of the learned input and the test input
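A minimal sketch (NumPy; hypothetical vectors and learning rate) of this point: after one delta-rule update on the learned pattern, the output for a correlated test pattern shifts by lrate * error * (learned input · test input).

```python
import numpy as np

lrate = 0.1
x_learned = np.array([1., 1., 1., 1.])   # the trained input pattern
t_learned = np.array([1.])               # its target output
x_test    = np.array([1., 1., 1., -1.])  # a similar (correlated) test input

W = np.zeros((1, 4))

out_before = W @ x_test
error = t_learned - W @ x_learned        # error on the learned pattern
W += lrate * np.outer(error, x_learned)  # one delta-rule update
out_after = W @ x_test

# The change in the test output is lrate * error * (x_learned . x_test).
print(out_after - out_before)                     # [0.2]
print(lrate * error * np.dot(x_learned, x_test))  # [0.2]
```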

  10. What Does This Mean? • When our new item is similar to what we’ve been trained on, learning is easier if the output we want is close to the output we get from other examples. • So, regular items (ones that have similar input-output relationships) don’t need a lot of training • Exceptions need more training.

  11. Frequency of Regulars and Exceptions

  12. Constraints on Learning • With the Hebb rule, each training input needs to be orthogonal to every other one in order to be separable and avoid cross-talk • With the delta rule, the inputs just have to be linearly independent of each other to prevent one training trial from wrecking what was learned on other trials • Linearly independent (for a pair of vectors): you can't produce vector A by multiplying vector B by a scalar
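A minimal sketch (NumPy; the vectors are the ones shown on the next slide) of the distinction: a pair of vectors can be linearly independent without being orthogonal.

```python
import numpy as np

def orthogonal(a, b, tol=1e-9):
    return abs(np.dot(a, b)) < tol

def linearly_independent(a, b, tol=1e-9):
    # Two vectors are dependent exactly when one is a scalar multiple of
    # the other, i.e. when the stacked matrix has rank < 2.
    return np.linalg.matrix_rank(np.vstack([a, b]), tol=tol) == 2

a, b = np.array([1., 1.]),  np.array([1., -1.])
c, d = np.array([-.5, 1.]), np.array([-1., .5])

print(orthogonal(a, b), linearly_independent(a, b))  # True  True
print(orthogonal(c, d), linearly_independent(c, d))  # False True
```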

  13. More Geometry... [figure: the orthogonal vectors (1,1) and (1,-1), and the vectors (-.5,1) and (-1,.5), which are linearly independent but not orthogonal]

  14. Different Types of Activation Functions • Linear: the output of a unit is simply the summed input to it • Linear threshold: the output is the summed input, but clipped so it cannot go above or below a threshold • Stochastic: roll dice as to what the output is, with probabilities based on the input • Sigmoid: 1/(1+exp(-net))
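A minimal sketch (NumPy) of the four activation functions listed above; the threshold values and the use of a logistic to set the stochastic unit's firing probability are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(net):
    return net                                 # output = summed input

def linear_threshold(net, lo=-1.0, hi=1.0):
    return np.clip(net, lo, hi)                # summed input, clipped at thresholds

def stochastic(net):
    # "Roll dice": output 1 with probability given by the logistic of the net input.
    p = 1.0 / (1.0 + np.exp(-net))
    return (rng.random(np.shape(net)) < p).astype(float)

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))
```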

  15. Delta Rule for Non-Linear Units • The weight change picks up an extra derivative term: Δwij = ε (ti − ai) f′(neti) aj • For linear units, f′(net) equals 1 • Otherwise, it's the derivative of our activation function f
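A minimal sketch (NumPy) of this generalized update for a sigmoid output unit; the logistic derivative a(1 − a) supplies the f′(net) term, which is simply 1 in the linear case.

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

def sigmoid_deriv(net):
    a = sigmoid(net)
    return a * (1.0 - a)                  # derivative of the logistic function

def delta_update(W, x, target, lrate=0.1):
    net = W @ x
    out = sigmoid(net)
    # dW_ij = lrate * (t_i - a_i) * f'(net_i) * x_j
    delta = (target - out) * sigmoid_deriv(net)
    return W + lrate * np.outer(delta, x)
```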

  16. So the Delta Rule Works Well for Any Activation Function f That Is Differentiable • Linear: easily differentiable • Sigmoid: easily differentiable • Threshold… not so much • (what about other error functions besides sum-squared?)

  17. Minimizing Sum Squared Error • With unambiguous input, the network will converge to the correct output • With ambiguous or noisy input, it will converge to the output that minimizes the average squared distance from all of the targets • This is effectively regression! • We can read the outputs as a probability distribution (recall the IA Reading Model)
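A minimal sketch (NumPy; toy numbers, batch updates for clarity) of the regression point: when one input is paired with conflicting targets, minimizing sum squared error drives the output to the average of those targets.

```python
import numpy as np

x = np.array([1., 1.])                        # a single, ambiguous input pattern...
targets = [np.array([1.]), np.array([0.]),
           np.array([1.]), np.array([1.])]    # ...paired with conflicting targets

W = np.zeros((1, 2))
lrate = 0.05
for _ in range(1000):
    dW = np.zeros_like(W)
    for t in targets:
        dW += lrate * np.outer(t - W @ x, x)  # accumulate error over the whole set
    W += dW                                   # batch delta-rule update

print(W @ x)    # [0.75]: the average of the conflicting targets (3 of 4 were 1)
```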

  18. Regression vs. Winner-Take-All • In the Jets and Sharks model, activating a gang node activated the "winner" in the age group • The other ages were suppressed • With the delta rule, the output is proportional to the statistics of the training set • Which is better?

  19. The Ideas From Ch 11 • We can think of patterns being correlated over units, rather than units correlated over patterns • Same with targets • Based on this, we can see how much cross talk there is between inputs, or weights, or outputs

  20. As Learning Progresses, Weights Become Aligned With Targets [figure: a sequence of three 3x3 matrices converging toward the identity matrix:
      1.0 0.4 0.2    1.0 0.1 0.0    1.0 0.0 0.0
      0.4 1.0 0.3    0.1 1.0 0.1    0.0 1.0 0.0
      0.2 0.3 1.0    0.0 0.1 1.0    0.0 0.0 1.0 ]

  21. Performance Measures • Sum squared error • tss is the total sum squared error • pss is the sum squared error for the current pattern • We can also compute vector differences between the actual output and the target output • ndp is the normalized dot product • nvl is the normalized vector length: the magnitude of the output vector • vcor is the correlation, ignoring magnitude

  22. Unpacking... • Suppose our targets were -1,1,-1 and our output was -0.5,0.5,-0.5 • vcor, the correlation ignoring length, is perfect (1.0) • Length (nvl) is less than 1; output is not at full magnitude • So, overall performance (ndp) is not 1.
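A minimal sketch (NumPy) that reproduces the numbers in this example. The definitions used here (ndp = output·target / n, nvl = |output| / √n, vcor = ndp / nvl) are an assumption consistent with the example, not a quotation of the PDP software's code.

```python
import numpy as np

def measures(output, target):
    n = len(target)
    ndp = np.dot(output, target) / n            # normalized dot product
    nvl = np.linalg.norm(output) / np.sqrt(n)   # normalized vector length
    vcor = ndp / nvl                            # correlation, ignoring magnitude
    return ndp, nvl, vcor

target = np.array([-1., 1., -1.])
output = np.array([-0.5, 0.5, -0.5])
print(measures(output, target))    # (0.5, 0.5, 1.0): vcor perfect, ndp and nvl not
```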

  23. Back to Generalization • Two-layer delta-rule networks are great for picking up on regularities • But they can't do XOR problems • Recall: regular and exception items (GAVE, WAVE, PAVE… HAVE) • Are exceptions a form of XOR?
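A minimal sketch (NumPy; "two-layer" here means a single layer of modifiable weights from input to output, with a sigmoid output unit and a bias) illustrating the XOR limitation: training never drives the total sum squared error anywhere near zero, because the XOR patterns are not linearly separable.

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

inputs  = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
targets = np.array([0., 1., 1., 0.])

# A single sigmoid output unit with a bias term, trained by the delta rule.
w, b, lrate = np.zeros(2), 0.0, 0.5
for _ in range(20000):
    for x, t in zip(inputs, targets):
        out = sigmoid(w @ x + b)
        delta = (t - out) * out * (1.0 - out)
        w += lrate * delta * x
        b += lrate * delta

outs = sigmoid(inputs @ w + b)
print(outs)                           # outputs stay near 0.5 on all four patterns
print(np.sum((targets - outs) ** 2))  # tss stays near 1.0, never approaching 0
```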

  24. XOR and Exceptions • Depends on your representation. • With localist word units (for example), they are linearly independent, and hence learnable. • … but you don’t get decent generalization with localist representations! • This state of affairs led many to conclude that there were two systems for learning regulars and exceptions

  25. Evidence for Two Systems • Phonological dyslexics: impaired at rule application, more or less OK at exceptions • Surface dyslexics: OK at rule application, poor at exceptions • Conclusion of many: there are two systems. One learns and applies rules; the other has localist word nodes and handles exceptions.

  26. History of the Argument • When this two-system account was put forward, it was not known how to train a network to handle XOR problems. • Existing symbolic models could also pick up rules, but needed something else for exceptions. • BUT: starting next week, we'll talk about learning rules that can handle the XOR problem.

  27. The Zorzi et al. Model [diagram: word input feeds a two-layer delta-rule association pathway and a "lexical" representation pathway, both of which feed pronunciation]

  28. For Thursday… • Topic: Distributed Representations • Read PDP1, Chapter 3 • Optional: handout, Plaut & McClelland, Stipulating versus discovering representations • Optional: Science article, Sparse population coding of faces in inferotemporal cortex • Look over homework #2
