
Part 1: Object recognition Part 2: Computational modelling






Presentation Transcript


  1. Part 1: Object recognitionPart 2: Computational modelling Jaap Murre Chapters 8, 17-18, 27 This lecture can be found at: http://neuromod.uva.nl/courses/np2000/

  2. Object recognition Chapters 17-18 and 27

  3. Overview • Object recognition • Apperceptive and associative agnosia • Hemi-neglect • The code of the brain • What and where pathways • Computational modelling • Introduction to neural networks • Hebbian learning • Perceptron and backpropagation

  4. Apperceptive and associative agnosia

  5. Warrington’s two-stage model of object recognition

  6. Warrington’s Unusual Views and Shadows Tests for apperceptive agnosia • Based on the following principle: right parietal lobe patients have problems recognizing an object if its features must be inferred or extracted from a limited perceptual input.

  7. Right hemisphere lesionUnusual Views Test

  8. Right hemisphere lesion (cont’d)Shadows Test

  9. Associative agnosia: semantic categorization is impaired

  10. Warrington’s two-stage model of object recognition

  11. Hemi-neglect

  12. ‘Bisect all the lines…’: a test for hemi-neglect

  13. Different visual stimulus arrays

  14. Evidence for contralateral inhibition

  15. Evidence for ipsilateral excitation

  16. Neglect distributed in objects

  17. Neglect in imaging

  18. The code of the brain Neural representations

  19. Types of neural representations • Extremely localized coding • 0000000000000000010000000000000000 • Semi-distributed or sparse coding • 0000100000100000010000000010000000 • Distributed coding • 1010111000101100110101000110111000
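The three coding schemes above can be illustrated directly. A minimal sketch: the binary vectors are copied from the slide; the activity-ratio measure of sparseness is my own addition, not part of the lecture.

```python
# The three coding schemes from the slide, as binary vectors of equal length.
localized   = [int(c) for c in "0000000000000000010000000000000000"]
sparse      = [int(c) for c in "0000100000100000010000000010000000"]
distributed = [int(c) for c in "1010111000101100110101000110111000"]

def activity_ratio(code):
    """Fraction of units that are active -- a simple sparseness measure."""
    return sum(code) / len(code)

for name, code in [("localized", localized), ("sparse", sparse),
                   ("distributed", distributed)]:
    print(f"{name:11s} active units: {sum(code):2d}  ratio: {activity_ratio(code):.2f}")
```

The localized code activates exactly one unit, the sparse code a handful, and the distributed code roughly half of them.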

  20. Extremely localized coding leads to the grandmother cell

  21. Sparse coding • Forms a good middle ground between fully distributed and extremely localized coding • Is biologically plausible • Is computationally sound in that it allows very large numbers of representations with a small number of units
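The capacity claim in the last bullet can be made concrete with a little combinatorics. A sketch with assumed numbers (n = 34 units to match the example vectors on the earlier slide, k = 4 active units; neither figure is from the lecture):

```python
import math

n = 34          # number of binary units
k = 4           # active units in a sparse code

localized_codes   = n                # one active unit -> one code per unit
sparse_codes      = math.comb(n, k)  # choose which k units are active
distributed_codes = 2 ** n           # any subset of units may be active

print(localized_codes, sparse_codes, distributed_codes)
```

Even with only 4 of 34 units active, the sparse scheme already supports tens of thousands of distinct representations, which is the sense in which sparse coding is "computationally sound" while remaining far easier to read out than a fully distributed code.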

  22. Desimone’s study of V4* neurons * V4 is visual cortex before inferotemporal cortex (IT)

  23. Neurons in IT show evidence of ‘short-term memory’ for events [figure panels: Human, Monkey] • Delayed matching-to-sample task • Many cells reduce their firing if the current stimulus matches the sample held in memory • Several (up to five) stimuli may intervene • The more similar the current stimulus is to the stimulus in memory, the stronger the reduction in firing

  24. Neural population response to a familiar stimulus first increases after presentation of the ‘target’, then decreases during the delay period, increases during early choice, and stabilizes about 100 ms before the saccade

  25. Reduced IT response and memory • Priming causes a reduction of firing in IT • This may be a reduced competition • This results in a sharpening of the population response • This in turns leads to a sparser representation

  26. Novelty filtering • Desimone et al.: IT neurons function as ‘adaptive filters’. They give their best response to features to which they are sensitive but which they have not recently seen (cf. Barlow) • This is a combination of familiarity and recency • Reduction in firing occurs when the animal (or the neuron) becomes familiar with the stimulus • This can be an effect of reduced competition
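The adaptive-filter idea can be sketched in a few lines. This is only an illustration of the qualitative behaviour described above; the function name, the multiplicative decay, and the decay factor are my own assumptions, not Desimone's model or data.

```python
# Sketch of a 'novelty filter': a unit's response to a stimulus is
# progressively suppressed each time that stimulus is seen again,
# so novel stimuli evoke the largest responses.

def novelty_filter_responses(stimuli, decay=0.5, base=1.0):
    """Return one response per stimulus; repeats are suppressed."""
    seen = {}
    responses = []
    for s in stimuli:
        count = seen.get(s, 0)
        responses.append(base * decay ** count)  # familiar -> weaker response
        seen[s] = count + 1
    return responses

print(novelty_filter_responses(["A", "B", "A", "A", "C"]))
# -> [1.0, 1.0, 0.5, 0.25, 1.0]: novel A, B, C give full responses,
#    repeated A is suppressed further on each presentation
```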

  27. What and where streams

  28. Where stream

  29. Neuron in posterior parietal cortex

  30. What stream

  31. Desimone’s study of V4* neurons * V4 is visual cortex before inferotemporal cortex (IT)

  32. A neuron in inferior temporal cortex (IT)

  33. What is known about what is located in the brain?

  34. PET data corroborate the lesion data

  35. Computational modelling Chapter 8

  36. Overview • Biological and connectionist neurons • McCulloch and Pitts • Learning • The Hebb rule • Willshaw networks • Error-correcting learning: Perceptron and backpropagation

  37. Neural networks • Based on an abstract view of the neuron • Artificial neurons are connected to form large networks • The connections determine the function of the network • Connections can often be formed by learning and do not need to be ‘programmed’

  38. Neural networks abstract from the details of real neurons • Conductivity delays are neglected • An output signal is either discrete (e.g., 0 or 1) or it is a real-valued number (e.g., between 0 and 1) • Net input is calculated as the weighted sum of the input signals • Net input is transformed into an output signal via a simple function (e.g., a threshold function)
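The abstract neuron just described — net input as a weighted sum, passed through a threshold function — can be written down directly. A minimal McCulloch–Pitts-style sketch; the particular weights and threshold below are arbitrary choices for illustration, not values from the lecture.

```python
# Abstract neuron: weighted sum of inputs, then a hard threshold.

def neuron_output(inputs, weights, threshold):
    net = sum(x * w for x, w in zip(inputs, weights))  # net input (weighted sum)
    return 1 if net >= threshold else 0                # discrete 0/1 output

# Example: with these weights and threshold the unit computes logical AND.
weights, threshold = [1.0, 1.0], 1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", neuron_output(x, weights, threshold))
```

Note how the function of the unit is determined entirely by its connections: changing the weights or threshold (rather than the code) changes what the unit computes.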

  39. Artificial ‘neuron’

  40. Illustration of a neural network • McClelland and Rumelhart’s (1981) model of context-effects in letter perception • Also illustrates content-addressable memory or pattern completion • Shows how one can use connectionist models

  41. Much of perception is dealing with ambiguity LAB

  42. Many interpretations are processed in parallel CAB

  43. The final interpretation must satisfy many constraints In the recognition of letters and words: i. Only one word can occur at a given position ii. Only one letter can occur at a given position iii. A letter-on-a-position activates a word iv. A feature-on-a-position activates a letter
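The constraints above can be sketched as a toy relaxation network. This is only an illustration of the constraint-satisfaction idea, not McClelland and Rumelhart's actual 1981 model: the evidence values, inhibition strength, decay, and learning-free update rule are all my own assumptions, and the input is taken to clearly show A and B while the first letter is ambiguous between L and C.

```python
# Toy constraint satisfaction: letter evidence excites consistent words
# (constraint iii), and word units inhibit each other so that only one
# word can win per position (constraint i).

words = ["LAP", "CAP", "CAB"]
# Evidence per letter position: first letter ambiguous (L or C),
# second clearly A, third clearly B.
letter_evidence = [{"L": 0.5, "C": 0.5}, {"A": 1.0}, {"B": 1.0}]

act = {w: 0.0 for w in words}
for _ in range(20):                       # iterate until the network settles
    new_act = {}
    for w in words:
        excitation = sum(letter_evidence[i].get(ch, 0.0)
                         for i, ch in enumerate(w))       # letters -> word
        inhibition = sum(act[v] for v in words if v != w) # word -> word
        net = excitation - 0.5 * inhibition
        new_act[w] = min(1.0, max(0.0, 0.9 * act[w] + 0.1 * net))
    act = new_act

print(max(act, key=act.get))  # the word that best satisfies the constraints
```

With this input, CAB receives the most letter support and suppresses its competitors, which is the sense in which recognition emerges from many constraints being satisfied in parallel.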

  44. i. Only one word can occur at a given position LAP CAP CAB L.. C.. .A. ..P ..B

  45. ii. Only one letter can occur at a given position LAP CAP CAB L.. C.. .A. ..P ..B

  46. iii. A letter-on-a-position activates a word LAP CAP CAB L.. C.. .A. ..P ..B

  47. iv. A feature-on-a-position activates a letter LAP CAP CAB L.. C.. .A. ..P ..B

  48. Recognition of a letter is a process of constraint satisfaction LAP CAP CAB L.. C.. .A. ..P ..B

