
Lecture 8: Knowledge



  1. Psyc 317: Cognitive Psychology Lecture 8: Knowledge

  2. Outline • Approaches to Categorization – Definitions – Prototypes – Exemplars • Is there a special level of category? • Semantic Networks • Connectionism • Categories in the brain

  3. Categorization is hierarchical • So we have levels of categories • How can all of this be represented in the mind? • Semantic network approach

  4. Collins & Quillian’s Model • Nodes are bits of information • Links connect them together [Figures: a semantic network template and a simple semantic network]

  5. Get more complicated! • Add properties to nodes:

  6. How does the network work? • Example: Retrieve properties of canary

  7. Why not store it all at the node? • To get “can fly” and “has feathers,” you must travel up to bird • Why not put it all at canary? • Cognitive economy: Repeating shared properties at every node is inefficient, so store each property once at the highest node it applies to • More efficient to store exceptions like “cannot fly” at the specific node (e.g., ostrich)
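As a concrete illustration of cognitive economy, here is a minimal Python sketch (not from the lecture; the network contents and function names are invented): shared properties live once at the highest applicable node, exceptions live at the specific node, and retrieval counts the is-a links it climbs, which is what is assumed to drive reaction time.

```python
# A toy semantic network. ISA links point one level up the hierarchy;
# PROPERTIES stores each fact only once, at the highest node it applies to.
ISA = {"canary": "bird", "ostrich": "bird", "bird": "animal"}

PROPERTIES = {
    "animal":  {"has skin", "can move"},
    "bird":    {"can fly", "has feathers"},
    "canary":  {"can sing", "is yellow"},
    "ostrich": {"cannot fly", "is tall"},  # exception stored at the node itself
}

def lookup(concept, prop):
    """Return (truth, links_traversed); more traversal predicts slower RT."""
    links = 0
    while concept is not None:
        local = PROPERTIES.get(concept, set())
        if prop in local:
            return True, links
        if prop.startswith("can ") and prop.replace("can ", "cannot ", 1) in local:
            return False, links  # a local exception blocks inheritance
        concept = ISA.get(concept)  # travel up one is-a link
        links += 1
    return False, links

for concept, prop in [("canary", "can sing"), ("canary", "can fly"),
                      ("canary", "has skin"), ("ostrich", "can fly")]:
    truth, links = lookup(concept, prop)
    print(f"{concept} / {prop}: {truth} ({links} link(s) traversed)")
```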

  8. How do we know this works? Collins & Quillian (1969) • Ask participants to verify canary properties that require more vs. less link traversal • Measure how long verification takes

  9. Link Traversal Demo Yes or no: • A German Shepherd is a type of dog. • A German Shepherd is a mammal. • A German Shepherd barks. • A German Shepherd has skin.

  10. Collins & Quillian Results

  11. Spreading activation: Priming the Network • An activated node spreads its activation along its links to connected nodes
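A minimal sketch of the idea, with an invented toy network and an arbitrary decay factor: activation starts at one node and passes outward along links, attenuating as it goes, leaving nearby concepts partially activated (primed).

```python
# Toy spreading-activation pass: a source node gets full activation, each
# link attenuates what it passes on, and anything still above threshold
# counts as "primed."
LINKS = {
    "bread":  ["butter", "wheat"],
    "butter": ["bread"],
    "wheat":  ["bread"],
    "doctor": ["nurse"],
    "nurse":  ["doctor"],
}

def spread(source, decay=0.5, threshold=0.1):
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        passed = activation[node] * decay  # attenuate per link crossed
        if passed < threshold:
            continue
        for neighbor in LINKS.get(node, []):
            if passed > activation.get(neighbor, 0.0):
                activation[neighbor] = passed
                frontier.append(neighbor)
    return activation

print(spread("bread"))  # "butter" ends up primed; "nurse" gets nothing
```

A primed node such as “butter” is already partway toward threshold, which is the proposed reason lexical decisions on associated pairs (next slide) are faster.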

  12. Spreading Activation Works: Meyer & Schvaneveldt (1971) • Lexical decision task: Are the two letter strings both words? [Figure: associated vs. unassociated word pairs]

  13. Meyer & Schvaneveldt Results • Associated words prime each other

  14. Collins & Quillian Criticisms • Typicality effect is not explained – ostrich and canary are both one link away from bird, yet canary is verified faster • Incongruent results (Rips et al., 1972): – A pig is a mammal: 1476 ms – A pig is an animal: 1268 ms – Mammal is the closer category, so it should have been faster

  15. Collins & Loftus’ Model • No more hierarchy • Shorter links between more connected concepts

  16. (Dis)advantages of the model “A fairly complicated theory with enough generality to apply to results from many different experimental paradigms.” • This is bad. Why?

  17. The model is unfalsifiable • The theory explains everything – How long should links be between nodes? [Figure: Result A and Result B each imply a different arrangement of nodes and link lengths]

  18. Everything is arbitrary • Cannot disprove theory: what does link length mean for the brain? • You can make connections as long as you want/need to explain your results

  19. Outline • Approaches to Categorization – Definitions – Prototypes – Exemplars • Is there a special level of category? • Semantic Networks • Connectionism • Categories in the brain

  20. Connectionism is a new version of semantic network theories • McClelland & Rumelhart (1986) • Concepts are represented in networks with nodes and links – But they function quite differently than in semantic networks • Theory is biologically based • A quick review of neurons…

  21. Physiological Basis of Connectionism • Neural circuits: Processing happens between many neurons connected by synapses • Excitatory and inhibitory connections:

  22. Physiological Basis of Connectionism • Strength of firing: The sum of excitatory (+) and inhibitory (−) inputs onto a neuron determines its rate of firing [Figure: a neuron summing weighted inputs]
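In code, such a unit is nearly trivial. This sketch (values invented for illustration) sums weighted excitatory and inhibitory inputs and rectifies the total into a firing rate:

```python
# A "dumb" unit: it just sums its weighted inputs (excitatory weights are
# positive, inhibitory weights negative) and rectifies the total.
def firing_rate(inputs, weights):
    total = sum(i * w for i, w in zip(inputs, weights))
    return max(0.0, total)  # a neuron cannot fire at a negative rate

# Two excitatory inputs and one inhibitory input (values made up):
print(firing_rate(inputs=[1.0, 1.0, 1.0], weights=[1.5, 0.2, -0.75]))  # 0.95
```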

  23. Distributed Coding

  24. Basics of Connectionism • Instead of nodes, you have units – Units are “neuronlike processing units” • Units are connected together • Parallel Distributed Processing (PDP) – Activation occurs in parallel – Processing occurs in many units

  25. Basic PDP network [Figure: input units receive activation from the environment; weighted connections do the processing; the pattern over output units is the mental representation]

  26. How a PDP network works • Give the network stimuli via the input units • Information is passed through the network by hidden units – Weights affect activation of nodes • Eventually, the stimulus is represented as a pattern via the output units
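A minimal forward pass, assuming a made-up 4-3-2 network with random weights (all sizes and values are illustrative, not from the lecture): the stimulus activates the input units, the hidden units update in parallel, and the pattern over the output units is the representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 4-input, 3-hidden, 2-output network with random connection weights.
W_in_hid = rng.normal(size=(4, 3))   # weights: input units -> hidden units
W_hid_out = rng.normal(size=(3, 2))  # weights: hidden units -> output units

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(stimulus):
    hidden = sigmoid(stimulus @ W_in_hid)  # all hidden units update in parallel
    return sigmoid(hidden @ W_hid_out)     # pattern over the output units

# A stimulus from "the environment," represented as a pattern of activation:
print(forward(np.array([1.0, 0.0, 1.0, 0.0])))
```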

  27. Example output • The brain represents things from the environment differently than they appear – as its own patterns of activation

  28. PDP Learning: Stage 1 • Give it input, get output

  29. Learning: Error signals • The output pattern is not the correct pattern • Figure out what the difference is – That difference is the error signal • Use the error signal to fine-tune weights • Error signal is sent back using back propagation

  30. Learning: Stage 2 • Back propagate error signal through network, adjust weights [Figure: weight values updated after back propagation]

  31. Learning: Stage 3, 4, 5… 1024 • Now that weights are adjusted, give network same input • Lather, rinse, repeat until error signal is 0

  32. So this is learning? • Repeated input and back propagation changes weights between units • When error signal = 0, the network has learned the correct weights for that stimulus – The network has been trained
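The whole procedure fits in a short NumPy sketch. This is a generic two-layer back-propagation loop; the pattern, layer sizes, and learning rate are all arbitrary choices, not the lecture's actual simulation:

```python
import numpy as np

rng = np.random.default_rng(1)

x = np.array([[1.0, 0.0, 1.0]])   # one input pattern (made up)
target = np.array([[1.0, 0.0]])   # the correct output pattern for it

W1 = rng.normal(scale=0.5, size=(3, 4))  # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(4, 2))  # hidden -> output weights
lr = 0.5                                 # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Stage 1: give it input, get output.
    h = sigmoid(x @ W1)
    out = sigmoid(h @ W2)

    # Error signal: the difference between output and correct pattern.
    error = out - target
    if np.abs(error).max() < 0.01:  # error signal is (near) 0: trained
        print(f"trained after {step} passes")
        break

    # Stage 2: back propagate the error signal, adjust the weights.
    d_out = error * out * (1 - out)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    W1 -= lr * x.T @ d_hid
```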

  33. So where is the knowledge? • Semantic networks – One node has “canary” and is connected to “can fly” and “yellow” • PDP networks – A bunch of nodes together represent “canary” and another bunch represent “yellow” – Distributed knowledge in neural circuits

  34. PDP: The Good – Networks based on neurons • All nodes can do is fire (they are dumb) • Knowledge is distributed amongst many nodes • Sounds a lot like neurons and the brain! • Emergence: Lots of little dumb things form one big smart thing

  35. PDP: The Good – Networks are damage-resistant • “Lesion” the network by taking out nodes • This damage does not totally take out the system – Graceful degradation • These networks can adapt to damage
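Graceful degradation can be shown directly: silence some hidden units in a toy network (random weights, purely illustrative) and the output pattern changes gradually with the amount of damage, rather than failing all at once.

```python
import numpy as np

rng = np.random.default_rng(2)

x = np.array([1.0, 0.0, 1.0])
W1 = rng.normal(size=(3, 20))  # 20 hidden units share the representation
W2 = rng.normal(size=(20, 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def output(lesioned=()):
    h = sigmoid(x @ W1)
    h[list(lesioned)] = 0.0  # "lesion": silence the chosen hidden units
    return sigmoid(h @ W2)

intact = output()
for n in (2, 5, 10):
    damaged = output(lesioned=rng.choice(20, size=n, replace=False))
    print(f"{n:2d} units lesioned -> mean output shift "
          f"{np.abs(damaged - intact).mean():.3f}")
```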

  36. PDP: The Good – Learning can be generalized • Related concepts should activate many of the same nodes – Robin and sparrow should share a lot of the same representation • PDP networks can emulate this – similar inputs produce similar patterns of activation (see the sketch below)
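Generalization falls out of the same math: the input-to-output mapping is smooth, so overlapping input patterns yield overlapping outputs. In this illustrative sketch (random untrained weights, invented feature vectors), robin and sparrow should land much closer together than robin and cat.

```python
import numpy as np

rng = np.random.default_rng(3)
W1 = rng.normal(size=(4, 6))
W2 = rng.normal(size=(6, 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    return sigmoid(sigmoid(x @ W1) @ W2)

robin   = np.array([1.0, 0.9, 0.1, 0.0])  # overlapping "bird" features
sparrow = np.array([1.0, 0.8, 0.2, 0.0])
cat     = np.array([0.0, 0.1, 0.9, 1.0])

# Distance between output patterns: small for similar inputs.
print("robin vs sparrow:", np.abs(forward(robin) - forward(sparrow)).mean())
print("robin vs cat:    ", np.abs(forward(robin) - forward(cat)).mean())
```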

  37. PDP: The Good – Successful computer models • Not just a theory, but can be programmed in a computer • Computational modeling of the mind – Object perception – Recognizing words

  38. PDP: The Bad – Cannot explain everything • More complex tasks cannot be explained – Problem solving – Language processing • Limitation of computers? – We have billions of neurons and trillions of synapses – PDP networks can’t support that many nodes (yet)

  39. PDP: The Bad – Retroactive interference • Learning something new interferes with something already learned • Example: Train network on “collie” – Weights are perfectly adjusted for collie • Give network “terrier” – Network must change weights again for terrier • Weights must change to accommodate both dogs
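The problem is easy to reproduce with the same kind of toy trainer (collie and terrier here are arbitrary input/target patterns, not the lecture's actual simulation): train to criterion on collie, then train only on terrier, and the collie error typically climbs back up, because the shared weights get re-tuned for the new pattern.

```python
import numpy as np

rng = np.random.default_rng(4)
W1 = rng.normal(scale=0.5, size=(4, 6))
W2 = rng.normal(scale=0.5, size=(6, 2))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    return sigmoid(sigmoid(x @ W1) @ W2)

def train(x, target, steps=3000, lr=0.5):
    global W1, W2
    for _ in range(steps):
        h = sigmoid(x @ W1)
        out = sigmoid(h @ W2)
        d_out = (out - target) * out * (1 - out)
        d_hid = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        W1 -= lr * x.T @ d_hid

collie  = (np.array([[1.0, 1.0, 0.0, 0.0]]), np.array([[1.0, 0.0]]))
terrier = (np.array([[0.0, 0.0, 1.0, 1.0]]), np.array([[0.0, 1.0]]))

train(*collie)   # weights are now adjusted for collie
before = np.abs(forward(collie[0]) - collie[1]).max()
train(*terrier)  # train on terrier ONLY; collie is never re-presented
after = np.abs(forward(collie[0]) - collie[1]).max()
print(f"collie error before terrier training: {before:.3f}, after: {after:.3f}")
```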

  40. PDP: The Bad – Cannot explain rapid learning • It does not take thousands of trials to remember that you parked in Lot K – How does rapid learning occur? • Two separate systems?

  41. How the connectionists explain rapid learning • Two separate systems: – Slow, PDP-style learning in the cortex – Rapid learning in the hippocampus

  42. Outline • Approaches to Categorization – Definitions – Prototypes – Exemplars • Is there a special level of category? • Semantic Networks • Connectionism • Categories in the brain

  43. Categories in the brain • Imaging studies have localized face and house areas – Still not very exciting (“light-up” studies) • Does this mean one brain area processes houses, another faces, another chairs, another technology, etc.?

  44. Visual agnosia for categories • Damage to inferior temporal cortex causes inability to name certain objects – Visual agnosia • Double dissociation for living/nonliving things

  45. Double Dissociation • Double dissociation for living/nonliving things

  46. Living vs. Non-living? • fMRI studies have shown different brain areas for living and non-living things • There is a lot of overlap between the two areas, though • How damage produces category-specific deficits is not well understood

  47. Category-specific neurons • Some neurons only respond to certain categories • A “Bill Clinton” neuron? Probably not. • A “Bill Clinton” neural circuit? More likely.

  48. Not categories, but continuum • There are probably no distinct face, house, chair, etc. areas in the brain • But everything’s not all stored in the same place, either • A mix of overlapping areas and distributed processes – Living vs. non-living is a big distinction
