
Connectionist Models


Presentation Transcript


  1. Connectionist Models [Figure: cortical map labeling the motor cortex, somatosensory cortex, sensory associative cortex, pars opercularis, visual associative cortex, Broca’s area, visual cortex, primary auditory cortex, and Wernicke’s area] Adapted from Neural Basis of Thought and Language, Jerome Feldman, Spring 2007, feldman@icsi.berkeley.edu

  2. Neural networks abstract from the details of real neurons • Conductivity delays are neglected • An output signal is either discrete (e.g., 0 or 1) or it is a real-valued number (e.g., between 0 and 1) • Net input is calculated as the weighted sum of the input signals • Net input is transformed into an output signal via a simple function (e.g., a threshold function)

  3. The McCulloch-Pitts Neuron: xi = ∑j wij yj; yi = f(xi − θi), where yj is the output from unit j, wij is the weight on the connection from j to i, xi is the weighted sum of inputs to unit i, θi is the threshold, and ti is the target.
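A minimal sketch of this unit in Python (the AND-gate weights and threshold below are illustrative, not from the slides):

```python
# McCulloch-Pitts unit: xi = sum_j wij * yj, yi = f(xi - theta_i),
# where f is a binary threshold.
def mcculloch_pitts(inputs, weights, theta):
    """Fires (returns 1) iff the weighted input sum reaches the threshold."""
    x = sum(w * y for w, y in zip(weights, inputs))
    return 1 if x - theta >= 0 else 0

# Example: with threshold 2, both inputs must be on -- an AND gate.
print(mcculloch_pitts([1, 1], [1.0, 1.0], theta=2.0))  # 1
print(mcculloch_pitts([1, 0], [1.0, 1.0], theta=2.0))  # 0
```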

  4. Mapping from neuron

  5. Simple Threshold Linear Unit

  6. Simple Neuron Model 1

  7. A Simple Example: a = x1w1 + x2w2 + x3w3 + … + xnwn. With weights w1 = 1, w2 = 0.5, w3 = 0.1 and inputs x1 = 0, x2 = 1, x3 = 0, the net input is a = 1·0 + 0.5·1 + 0.1·0 = 0.5. The threshold bias is 1, and since net input − threshold bias = 0.5 − 1 < 0, the output is 0.
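The same computation in code, reproducing the slide's numbers:

```python
weights = [1.0, 0.5, 0.1]  # w1, w2, w3
inputs = [0, 1, 0]         # x1, x2, x3
theta = 1.0                # threshold bias

a = sum(w * x for w, x in zip(weights, inputs))  # 1*0 + 0.5*1 + 0.1*0 = 0.5
output = 1 if a - theta >= 0 else 0              # 0.5 - 1 < 0, so 0
print(a, output)  # 0.5 0
```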

  8.–11. Simple Neuron Model [Figures: a step-by-step animation propagating binary inputs (1s and 0s) through the unit to produce an output of 1 or 0]

  12. Different Activation Functions (with a bias unit, x0 = 1) • Threshold Activation Function (step) • Piecewise Linear Activation Function • Sigmoid Activation Function • Gaussian Activation Function • Radial Basis Function
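Hedged one-line sketches of the first four functions on the list (the piecewise-linear breakpoints and the Gaussian width are illustrative choices; the k parameter anticipates slide 17):

```python
import math

def step(x):                 # threshold (step) activation
    return 1.0 if x >= 0 else 0.0

def piecewise_linear(x):     # linear between x = -1 and x = 1, clipped outside
    return max(0.0, min(1.0, 0.5 * x + 0.5))

def sigmoid(x, k=1.0):       # k > 1 steepens the curve, k < 1 flattens it
    return 1.0 / (1.0 + math.exp(-k * x))

def gaussian(x, sigma=1.0):  # bell-shaped response centered at x = 0
    return math.exp(-x * x / (2.0 * sigma * sigma))
```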

  13. Types of Activation functions

  14.–16. The Sigmoid Function [Figures: sigmoid curve y = a plotted against x = neti, saturating at output = 0 and output = 1, with the steep middle region marking the greatest sensitivity to input]

  17. Changing the exponent k [Figure: sigmoid curves, steeper for k > 1 and shallower for k < 1]

  18. Radial Basis Function

  19. Stochastic units • Replace the binary threshold units by binary stochastic units that make biased random decisions. • The “temperature” T controls the amount of noise.
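A sketch of such a unit, assuming the standard logistic-with-temperature form p(fire) = 1 / (1 + e^(−z/T)); this is the usual formula for stochastic binary units, not quoted from the slide:

```python
import math
import random

def stochastic_unit(z, T):
    """Fire with probability 1 / (1 + exp(-z / T)); higher T means noisier decisions."""
    p = 1.0 / (1.0 + math.exp(-z / T))
    return 1 if random.random() < p else 0

# Low temperature: nearly deterministic. High temperature: close to a coin flip.
print(sum(stochastic_unit(1.0, T=0.1) for _ in range(1000)))    # ~1000
print(sum(stochastic_unit(1.0, T=100.0) for _ in range(1000)))  # ~500
```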

  20. Types of Neuron parameters • The form of the input function, e.g., linear, sigma-pi (multiplicative), cubic. • The activation-output relation: linear, hard-limiter, or sigmoidal. • The nature of the signals used to communicate between nodes: analog or Boolean. • The dynamics of the node: deterministic or stochastic.

  21. Computing various functions • McCulloch-Pitts neurons can compute logical functions. • AND, NOT, OR

  22. Computing other functions: the OR function [Figure: a single unit with inputs i1 and i2, bias input b = 1, weights w01, w02, w0b, and output y0 = f(x0)] • Assume a binary threshold activation function. • What should you set w01, w02, and w0b to be so that you can get the right answers for y0?

  23. Many answers would work: y = f(w01 i1 + w02 i2 + w0b b). Recalling the threshold function, the separation happens when w01 i1 + w02 i2 + w0b b = 0; moving things around, you get i2 = −(w01/w02) i1 − (w0b b/w02). [Figure: the resulting decision line in the (i1, i2) plane]
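One weight setting that works, checked against the OR truth table (these particular values are an illustrative choice; many others satisfy the constraint):

```python
def or_unit(i1, i2, w01=1.0, w02=1.0, w0b=-0.5, b=1):
    # Binary threshold at 0: fires when w01*i1 + w02*i2 + w0b*b >= 0.
    # The implied decision line is i2 = -i1 + 0.5.
    return 1 if w01 * i1 + w02 * i2 + w0b * b >= 0 else 0

for i1 in (0, 1):
    for i2 in (0, 1):
        print(i1, i2, or_unit(i1, i2))  # 0 0 -> 0; all other rows -> 1
```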

  24. Decision Hyperplane • The two classes are therefore separated by the ‘decision’ line, which is defined by putting the activation equal to the threshold. • It turns out that it is possible to generalise this result to TLUs with n inputs. • In 3-D the two classes are separated by a decision-plane. • In n-D this becomes a decision-hyperplane.

  25. Linearly Separable Patterns The PERCEPTRON is an architecture which can solve this type of decision-boundary problem. An "on" response in the output node represents one class, and an "off" response represents the other.
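A minimal sketch of the perceptron learning rule on one linearly separable problem (the learning rate, epoch count, and OR data are illustrative):

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """data: list of ((x1, x2), target) pairs, with targets 0 or 1."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            y = 1 if w[0] * x1 + w[1] * x2 + bias >= 0 else 0
            err = target - y          # perceptron rule: move weights by the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            bias += lr * err
    return w, bias

# OR is linearly separable, so training converges to a separating line.
print(train_perceptron([((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]))
```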

  26.–28. The Perceptron [Figures: build-up of the perceptron architecture, from the bare network to an input pattern to its output classification]

  29. A Pattern Classification

  30. Pattern Space • The space in which the inputs reside is referred to as the pattern space. • Each pattern determines a point in the space by using its component values as space-coordinates. • In general, for n-inputs, the pattern space will be n-dimensional.

  31. The XOR Function

  32. The Input Pattern Space

  33. The Decision planes From: S. Harris Computer Cartoons http://www.sciencecartoonsplus.com/galcomp2.htm

  34. Multi-layer Feed-forward Network
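Tying slides 31–34 together: XOR is not linearly separable, so no single threshold unit can compute it, but a two-layer feed-forward net can. A sketch with hand-set weights (the particular weights are an illustrative choice):

```python
def threshold(x):
    return 1 if x >= 0 else 0

def xor_net(i1, i2):
    h1 = threshold(i1 + i2 - 0.5)        # hidden unit 1: at least one input on
    h2 = threshold(i1 + i2 - 1.5)        # hidden unit 2: both inputs on
    return threshold(h1 - 2 * h2 - 0.5)  # output: h1 AND NOT h2

for i1 in (0, 1):
    for i2 in (0, 1):
        print(i1, i2, xor_net(i1, i2))  # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```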

  35. Pattern Separation and NN architecture

  36. Triangle nodes and McCulloch-Pitts neurons? [Figure: a triangle node linking three units A, B, and C]

  37. Representing concepts using triangle nodes: when two of the neurons fire, the third also fires.
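A toy sketch of that binding rule (the node names and one-step recruitment are illustrative simplifications):

```python
def triangle_step(active):
    """active: subset of {'A', 'B', 'C'} currently firing.
    When two nodes of a triangle fire, the third fires on the next step."""
    if len(active) >= 2:
        return {'A', 'B', 'C'}  # the missing node is recruited
    return set(active)

print(sorted(triangle_step({'A', 'B'})))  # ['A', 'B', 'C']
print(sorted(triangle_step({'A'})))       # ['A']
```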

  38. Basic Ideas • Parallel activation streams. • Top down and bottom up activation combine to determine the best matching structure. • Triangle nodes bind features of objects to values • Mutual inhibition and competition between structures • Mental connections are active neural connections

  39. Bottom-up vs. Top-down Processes • Bottom-up: When processing is driven by the stimulus • Top-down: When knowledge and context are used to assist and drive processing • Interaction: The stimulus is the basis of processing but almost immediately top-down processes are initiated

  40. Stroop Effect • Interference between form and meaning

  41. Name the words: Book Car Table Box Trash Man Bed Corn Sit Paper Coin Glass House Jar Key Rug Cat Doll Letter Baby Tomato Check Phone Soda Dish Lamp Woman

  42. Name the print color of the words: Blue Green Red Yellow Orange Black Red Purple Green Red Blue Yellow Black Red Green White Blue Yellow Red Black Blue White Red Yellow Green Black Purple

  43. Connectionist Model (McClelland & Rumelhart, 1981) • Knowledge is distributed and processing occurs in parallel, with both bottom-up and top-down influences • This model can explain the Word-Superiority Effect because it can account for context effects
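A toy sketch of that interaction for word recognition, with one bottom-up and one top-down pass (the two-word lexicon, evidence values, and 0.1 gain are invented for illustration and are not the model's actual parameters):

```python
# Bottom-up: letter evidence activates the words that contain those letters.
lexicon = {"CAT": ["C", "A", "T"], "CAP": ["C", "A", "P"]}
letter_act = {"C": 0.9, "A": 0.8, "T": 0.3, "P": 0.1}  # noisy stimulus evidence

word_act = {w: sum(letter_act[l] for l in ls) for w, ls in lexicon.items()}

# Top-down: active words feed support back to their letters, so context
# (the word level) sharpens an ambiguous letter -- a context effect.
for w, ls in lexicon.items():
    for l in ls:
        letter_act[l] += 0.1 * word_act[w]

print(word_act)    # CAT (2.0) leads CAP (1.8)
print(letter_act)  # 'T' gains more top-down support than 'P'
```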

  44. Connectionist Model of Word Recognition

  45. TRACE predictions Do rhymes compete? • Cohort (Marslen-Wilson): • onset similarity is primary because of the incremental nature of speech • Cat activates cap, cast, cattle, camera, etc. • NAM (Neighborhood Activation Model): • global similarity is primary • Cat activates bat, rat, cot, cast, etc. • TRACE (McClelland & Elman): • global similarity constrained by incremental nature of speech

  46. Do rhymes compete? • Temporal Sequence Learning in LTM • global similarity constrained by incremental nature of speech

  47. A 2-step Lexical Model [Figure: semantic features feed word nodes (FOG, DOG, CAT, RAT, MAT), which connect to phoneme nodes grouped as onsets (f, r, d, k, m), vowels (ae, o), and codas (t, g)]

  48. Linking memory and tasks From: S. Harris Computer Cartoons http://www.sciencecartoonsplus.com/galcomp2.htm

  49. Distributed vs Local Representation • Distributed: how many persons can you represent with n bits? 2^n. But what happens if you want to represent a group? • Local: how many persons can you represent with n bits? n. But what happens if one neuron dies?
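The counting argument in code (n = 3 for illustration):

```python
from itertools import product

n = 3
distributed = list(product([0, 1], repeat=n))  # every binary pattern is a code
local = [tuple(int(i == j) for i in range(n)) for j in range(n)]  # one-hot codes

print(len(distributed))  # 2**n = 8 persons representable
print(len(local))        # n = 3 persons representable
```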
