
Statistical learning and optimal control: A framework for biological learning and motor control


Presentation Transcript


  1. Statistical learning and optimal control: A framework for biological learning and motor control Lecture 2: Models of biological learning and sensory-motor integration Reza Shadmehr Johns Hopkins School of Medicine

  2. Various forms of classical conditioning in animal psychology: not explained by LMS, but predicted by the Kalman filter. Table from Peter Dayan.

  3. Kalman filter as a model of animal learning. Suppose that x represents inputs from the environment: a light and a tone. Suppose that y represents rewards, like a food pellet. (Equations on the slide: the animal's model of the experimental setup, the animal's expectation on trial n, and the animal's learning from trial n.)
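A minimal sketch of this learner, assuming the scalar-output model described on the slide (y = w·x + noise, with random-walk drift on the weights); all parameter values are illustrative:

```python
import numpy as np

def kalman_learner(X, y, q=0.01, r=1.0, p0=1.0):
    """Trial-by-trial Kalman filter for y = w.x + noise.

    X: (n_trials, n_stimuli) stimulus matrix; y: (n_trials,) rewards.
    q: state (random-walk) noise variance; r: output noise variance.
    """
    n, m = X.shape
    w = np.zeros(m)                            # estimated association weights
    P = p0 * np.eye(m)                         # uncertainty about the weights
    W = np.zeros((n, m))
    for t in range(n):
        x = X[t]
        k = P @ x / (x @ P @ x + r)            # Kalman gain
        w = w + k * (y[t] - x @ w)             # learn from the prediction error
        P = (np.eye(m) - np.outer(k, x)) @ P   # observing reduces uncertainty
        P = P + q * np.eye(m)                  # state noise restores uncertainty
        W[t] = w
    return W
```

Unlike LMS, the learning rate here (the Kalman gain) is set by the learner's uncertainty rather than by a fixed constant.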

  4. Sharing paradigm. Train: {x1, x2} -> 1. Test: x1 -> ?, x2 -> ?. Result: x1 -> 0.5, x2 -> 0.5. (Figure: weights w1, w2, uncertainties P11, P22, Kalman gains k1, k2, and predictions yhat vs. y over 40 trials, for learning with the Kalman gain vs. LMS.)
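Running the sketch above on the sharing paradigm reproduces this result (values approximate):

```python
X = np.tile([1.0, 1.0], (40, 1))   # light and tone always presented together
y = np.ones(40)                    # one unit of reward
W = kalman_learner(X, y)
print(W[-1])                       # approx. [0.5, 0.5]: the stimuli share the credit
```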

  5. Blocking. Kamin (1968) Attention-like processes in classical conditioning. In: Miami symposium on the prediction of behavior: aversive stimulation (ed. MR Jones), pp. 9-33. Univ. of Miami Press. Kamin trained an animal to continuously press a lever to receive food. He then paired a light (conditioned stimulus) with a mild electric shock to the foot of the rat (unconditioned stimulus). In response to the shock, the animal would reduce its lever-press activity. Soon the animal learned that the light predicted the shock, and therefore reduced lever pressing in response to the light. He then paired the light with a tone when giving the electric shock. After this second stage of training, he observed that when the tone was given alone, the animal did not reduce its lever pressing. The animal had not learned anything about the tone.

  6. Blocking paradigm. Train: x1 -> 1, then {x1, x2} -> 1. Test: x2 -> ?, x1 -> ?. Result: x2 -> 0, x1 -> 1. (Figure: weights, uncertainties P11, P22, Kalman gains k1, k2, and predictions yhat vs. y over 40 trials, for learning with the Kalman gain vs. LMS.)
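The same learner reproduces blocking: by stage 2 the prediction error is already near zero, so the tone acquires almost no weight (reusing kalman_learner from above):

```python
# Stage 1: x1 alone predicts the reward; stage 2: x1 and x2 together.
X = np.vstack([np.tile([1.0, 0.0], (20, 1)),
               np.tile([1.0, 1.0], (20, 1))])
y = np.ones(40)
W = kalman_learner(X, y)
print(W[-1])    # approx. [1, 0]: x1 already explains the reward, so x2 is blocked
```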

  7. Backwards blocking paradigm. Train: {x1, x2} -> 1, then x1 -> 1. Test: x2 -> ?. Result: x2 -> 0. (Figure: weights, uncertainties, Kalman gains, and predictions over 60 trials, for learning with the Kalman gain vs. LMS.)
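Backwards blocking is the case LMS cannot produce but the Kalman filter can: joint training leaves a negative covariance between w1 and w2 in P, so errors during the x1-alone stage drive w2 back down even though x2 is never presented again.

```python
# Train {x1, x2} -> 1 first, then x1 -> 1 alone.
X = np.vstack([np.tile([1.0, 1.0], (30, 1)),
               np.tile([1.0, 0.0], (30, 1))])
y = np.ones(60)
W = kalman_learner(X, y)
print(W[-1])    # w2 falls toward 0 in stage 2, despite x2 never reappearing
```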

  8. Different output models Suppose that x represents inputs from the environment: a light and a tone. Suppose that y represents a reward, like a food pellet. Case 1: the animal assumes an additive model. If each stimulus predicts one reward, then if the two are present together, they predict two rewards. Case 2: the animal assumes a weighted average model. If each stimulus predicts one reward, then if the two are present together, they still predict one reward, but with higher confidence. The weights a1 and a2 should be set to the inverse of the variance (uncertainty) with which each stimulus x1 and x2 predicts the reward.
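In symbols, with a1 and a2 set to the inverse variances as stated above:

```latex
% Case 1: additive output model
\hat{y} = w_1 x_1 + w_2 x_2
% Case 2: weighted-average output model
\hat{y} = \frac{a_1 w_1 x_1 + a_2 w_2 x_2}{a_1 + a_2},
\qquad a_i = \frac{1}{\sigma_i^2}
```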

  9. General case of the Kalman filter. The hidden variable is an n×1 vector and the observation is m×1. The slide gives: the a priori estimate of the mean and variance of the hidden variable before I observe the first data point; the update of the estimate of the hidden variable after I observe the data point; and the forward projection of the estimate to the next trial.
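The standard discrete-time form of these three steps (writing x for the n×1 hidden variable and y for the m×1 observation, with state noise Q and output noise R):

```latex
% A priori estimate before the first data point:
\hat{x}^{(1|0)}, \qquad P^{(1|0)}
% Update after observing y^{(n)}:
K^{(n)} = P^{(n|n-1)} C^{\top} \left( C\, P^{(n|n-1)} C^{\top} + R \right)^{-1}
\hat{x}^{(n|n)} = \hat{x}^{(n|n-1)} + K^{(n)} \left( y^{(n)} - C\, \hat{x}^{(n|n-1)} \right)
P^{(n|n)} = \left( I - K^{(n)} C \right) P^{(n|n-1)}
% Forward projection to the next trial:
\hat{x}^{(n+1|n)} = A\, \hat{x}^{(n|n)}, \qquad
P^{(n+1|n)} = A\, P^{(n|n)} A^{\top} + Q
```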

  10. Application of Kalman filter to problems in sensorimotor control Motor command State of our body Sensory measurement DM Wolpert et al. (1995) Science 269:1880

  11. When we move our arm in darkness, we may estimate the position of our hand based on three sources of information: • proprioceptive feedback; • a forward model of how the motor commands have moved our arm; • a combination of the prediction from the forward model with the actual proprioceptive feedback. Experimental procedure: the subject holds a robotic arm in total darkness. The hand is briefly illuminated. An arrow is displayed to the left or right, showing which way to move the hand. In some cases, the robot produces a constant force that assists or resists the movement. The subject slowly moves the hand until a tone sounds, then uses the other hand to move a mouse cursor to show where they think their hand is located. DM Wolpert et al. (1995) Science 269:1880

  12. DM Wolpert et al. (1995) Science 269:1880

  13. The generative model, describing the actual dynamics of the limb: the motor command drives the state of the body (through B and A), which produces the sensory measurement (through C). The same structure serves as the model for estimating the state from sensory feedback. For whatever reason, the brain has an incorrect model of the arm: it overestimates the effect of motor commands on changes in limb position.

  14. Initial conditions: the subject can see the hand and has no uncertainty regarding its position and velocity. Then, on each step: the forward model predicts the state change and the expected feedback; the actual observation arrives; the estimate of state incorporates the prior and the observation; and the forward model establishes the prior and the uncertainty for the next state.
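A toy scalar version of this estimator, with an inflated forward-model gain standing in for the brain's overestimate (all values here are illustrative, not fit to the data in the paper):

```python
import numpy as np

A, B, C = 1.0, 1.0, 1.0      # true dynamics: position driven by motor command
B_hat = 1.3                  # the brain overestimates the effect of the command
Q, R = 0.01, 1.0             # state noise and proprioceptive noise variances
rng = np.random.default_rng(0)

x = 0.0                      # true hand position
xh, P = 0.0, 0.0             # estimate starts certain (hand was visible)
for t in range(100):
    u = 0.1                                          # constant motor command
    x = A * x + B * u + rng.normal(0, np.sqrt(Q))    # actual limb dynamics
    xh = A * xh + B_hat * u                          # biased forward model (prior)
    P = A * P * A + Q                                # prior uncertainty
    yobs = C * x + rng.normal(0, np.sqrt(R))         # proprioceptive feedback
    k = P * C / (C * P * C + R)                      # Kalman gain
    xh = xh + k * (yobs - C * xh)                    # combine prior and feedback
    P = (1 - k * C) * P
print(xh - x)   # positive: the estimate overshoots, as in the observed bias
```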

  15. (Figure: simulation of a single movement. Panels show the actual and estimated position (cm) over time, the motor command u up to the time of the "beep", and the Kalman gain over time (sec); and, for movements of various lengths, the bias (cm) and variance (cm^2) of the position estimate at the end of the movement as a function of total movement time (sec).)

  16. Puzzling results: savings and memory despite "washout". Gain = eye displacement divided by target displacement. Result 1: after changes in gain, monkeys exhibit recall despite behavioral evidence for washout. Kojima et al. (2004) Memory of learning facilitates saccade adaptation in the monkey. J Neurosci 24:7531.

  17. Puzzling results: Improvements in performance without error feedback Result 2: Following changes in gain and a long period of washout, monkeys exhibit no recall. Result 3: Following changes in gain and a period of darkness, monkeys exhibit a “jump” in memory. Kojima et al. (2004) J Neurosci 24:7531.

  18. The learner's hypothesis about the structure of the world: • 1. The world has many hidden states. What I observe is a linear combination of these states. • 2. The hidden states change from trial to trial. Some change slowly, others change fast. • 3. The states that change fast have larger noise than the states that change slowly. (The slide gives the state transition equation and output equation for a system with a slow and a fast state; a minimal simulation follows below.)
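A sketch of the two-state learner under these assumptions (parameter values illustrative):

```python
import numpy as np

a = np.diag([0.998, 0.9])    # retention: the slow state decays very slowly
Q = np.diag([1e-5, 1e-2])    # the fast state has much larger state noise
c = np.array([1.0, 1.0])     # output = sum of the two hidden states
r = 0.04                     # output noise variance

def simulate(targets):
    """Two-state Kalman learner run on a sequence of observed outcomes."""
    w, P = np.zeros(2), np.eye(2) * 0.01
    history = []
    for y in targets:
        k = P @ c / (c @ P @ c + r)           # fast state gets the larger gain
        w = w + k * (y - c @ w)               # measurement update
        P = (np.eye(2) - np.outer(k, c)) @ P
        w, P = a @ w, a @ P @ a.T + Q         # states drift between trials
        history.append(w.copy())
    return np.array(history)

# Savings: adapt, wash out until behavior is back at baseline, then readapt.
# Relearning is faster because the slow state never fully unlearned.
W = simulate([1.0] * 100 + [-1.0] * 20 + [1.0] * 100)
```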

  19. Simulations for savings. (Figure: weights w1, w2, Kalman gains k1, k2, predictions yhat vs. y, and the perturbation schedule over 300 trials.) The critical assumption is that in the fast system there is much more noise than in the slow system. This produces a larger learning rate in the fast system.

  20. Simulations for spontaneous recovery despite zero error feedback (error clamp). (Figure: perturbation schedule, predictions yhat vs. y, and weights w1, w2 over 300 trials, with an error-clamp period at the end.) In the error-clamp period, estimates are still made, yet the weight update equation sees no error. Therefore, the effect of the Kalman gain in the error-clamp period is zero. Nevertheless, the weights continue to change because of the state update equations: the fast weights rapidly rebound to zero, while the slow weights slowly decline. The sum of these two changes produces a "spontaneous recovery" after washout.
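The error-clamp simulation is the same loop with the error term forced to zero; this reuses a, Q, c, r and the import from the sketch above:

```python
# During the clamp the update sees no error, but the state equations keep
# running: the fast state rebounds toward zero while the slow state decays,
# and their sum transiently rises -- spontaneous recovery.
w, P = np.zeros(2), np.eye(2) * 0.01
net = []
schedule = [(1.0, False)] * 100 + [(-1.0, False)] * 20 + [(0.0, True)] * 100
for y, clamped in schedule:
    k = P @ c / (c @ P @ c + r)
    err = 0.0 if clamped else y - c @ w      # error clamp zeroes the error term
    w = w + k * err
    P = (np.eye(2) - np.outer(k, c)) @ P
    w, P = a @ w, a @ P @ a.T + Q
    net.append(c @ w)                        # net adaptation (sum of states)
```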

  21. Changes in representation without error feedback. Target visible during recovery: mean gain at start of recovery = 0.83, at end = 0.95 (gain change = 14.4%). Target extinguished during recovery: mean gain at start of recovery = 0.86, at end = 0.87 (gain change = 1.2%). Seeberger et al. (2002) Brain Research 956:374-379.

  22. Massed vs. spaced training: the effect of changing the inter-trial interval. (Figure: discrimination performance (sec) for ITI = 8 min vs. ITI = 1 min; also, learning reaching in a force field.) Han, J.S., Gallagher, M. & Holland, P. Hippocampus 8:138-46 (1998). Rats were trained on an operant conditional discrimination in which an ambiguous stimulus (X) indicated both the occasions on which responding in the presence of a second cue (A) would be reinforced and the occasions on which responding in the presence of a third cue (B) would not be reinforced (X -> A+, A-, X -> B-, B+). Both rats with lesions of the hippocampus and control rats learned this discrimination more rapidly when the training trials were widely spaced (intertrial interval of 8 min) than when they were massed (intertrial interval of 1 min). With spaced practice, lesioned and control rats learned this discrimination equally well. But when the training trials were massed, lesioned rats learned more rapidly than controls.

  23. Cue-dependent saccade gain adaptation: when the eyes are looking up, increase saccade gain; when the eyes are looking down, decrease gain. Aboukhalil, A., Shelhamer, M. & Clendaniel, R. (2004) Neurosci Lett 369:162-7. Performance in a water maze: escape latency (s) for 16 trials in one day vs. 4 trials per day for 4 days (training trials binned by 4; break period: 1 min). Commins, S., Cunningham, L., Harvey, D. & Walsh, D. (2003) Behav Brain Res 139:215-23.

  24. The learner's hypothesis about the structure of the world: • 1. The world has many hidden states. What I observe is a linear combination of these states. • 2. The hidden states change from trial to trial. Some change slowly, others change fast. • 3. The states that change fast have larger noise than the states that change slowly. • 4. The state changes can occur more frequently than I can make observations: the state transition matrix A is applied once per time step throughout the inter-trial interval (a sketch follows below).
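One way to express assumption 4 in the simulations: run the state (time) update once per time step between observations, so a longer ITI means more applications of A and more accumulated state noise Q (reusing a and Q from the two-state sketch above):

```python
def time_update(w, P, steps):
    """Propagate the estimate through `steps` time steps with no observation."""
    for _ in range(steps):
        w = a @ w                  # estimates decay toward zero
        P = a @ P @ a.T + Q        # uncertainty shrinks by a^2, grows by Q
    return w, P
```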

  25. (Figure: predictions yhat vs. target y for ITI = 2 over 300 time steps and for ITI = 20 over 3000 time steps.)

  26. (Figure: uncertainty P11 for the slow state and P22 for the fast state between observations, ITI = 20.) When there is an observation, the uncertainty for each hidden variable decreases in proportion to its Kalman gain. When there are no observations, the uncertainty decreases in proportion to A squared, but increases in proportion to the state noise Q. Beyond a minimum ITI, increasing the ITI continues to increase the uncertainty of the slow state but has little effect on the fast state's uncertainty. The longer ITI increases the total learning by increasing the slow state's sensitivity to error.

  27. (Figure: weights w1, w2, Kalman gains k1, k2, and uncertainties P11, P22, P12 for massed vs. spaced training over 140 observations.) Performance in spaced training depends largely on the slow state. Therefore, spaced training produces memories that decay little with the passage of time.

  28. Spaced training results in better retention in learning a second language. (Figure: performance during training and at test, for ITI = 2, 14, or 98 trials.) On Day 1, subjects learned to translate written Japanese words into English. They were given a Japanese word (written phonetically) and then its English translation. This "study trial" was repeated twice. Afterwards, they were given the Japanese word and had to write the translation; if their translation was incorrect, the correct translation was given. The ITI between word repetitions was either 2, 14, or 98 trials. Performance during training was better when the ITI was short. However, retention was much better for words that were studied with the longer ITI. (The retention test involved two groups, one at 1 day and the other at 7 days; performance was slightly better for the 1-day group, but the results were averaged in the figure.) Pavlik, P. I. and Anderson, J. R. (2005). Practice and forgetting effects on vocabulary memory: An activation-based model of the spacing effect. Cognitive Science, 29, 559-586.

  29. Data fusion. Suppose that we have two sensors that independently measure something. We would like to combine their measures to form a better estimate. What should the weights be? Suppose that we know that sensor 1 gives us measurement y1 with Gaussian noise of variance $\sigma_1^2$, and similarly that sensor 2 gives us measurement y2 with Gaussian noise of variance $\sigma_2^2$. A good idea is to weight each sensor in inverse proportion to its noise: $\hat{x} = \left( y_1/\sigma_1^2 + y_2/\sigma_2^2 \right) / \left( 1/\sigma_1^2 + 1/\sigma_2^2 \right)$.
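A sketch of this inverse-variance rule (function name and example values are illustrative):

```python
def fuse(y1, var1, y2, var2):
    """Combine two independent Gaussian measurements of the same quantity."""
    w1, w2 = 1.0 / var1, 1.0 / var2          # precision of each sensor
    mean = (w1 * y1 + w2 * y2) / (w1 + w2)   # precision-weighted average
    var = 1.0 / (w1 + w2)                    # posterior variance
    return mean, var

print(fuse(4.0, 1.0, 6.0, 4.0))   # (4.4, 0.8): pulled toward the reliable sensor
```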

  30. Data fusion via the Kalman filter. To see why this makes sense, let's put forth a generative model that describes our hypothesis about how the data we are observing is generated: a single hidden variable gives rise to the observed variables y1 and y2.
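Written out, with a flat prior on the hidden variable x:

```latex
% Generative model: one hidden variable, two independent noisy views of it
y_1 = x + \varepsilon_1, \qquad \varepsilon_1 \sim \mathcal{N}(0, \sigma_1^2)
y_2 = x + \varepsilon_2, \qquad \varepsilon_2 \sim \mathcal{N}(0, \sigma_2^2)
% Posterior after both observations: precisions add
\frac{1}{\sigma_{\text{post}}^2} = \frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2},
\qquad
\hat{x} = \sigma_{\text{post}}^2 \left( \frac{y_1}{\sigma_1^2} + \frac{y_2}{\sigma_2^2} \right)
```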

  31. (Equations on the slide: the priors, the update after our first observation, and the variance of our posterior estimate. See homework for the derivation.)

  32. (Figure: "the real world" vs. "what our sensors tell us".) Notice that after we make our first observation, the variance of our posterior is smaller than the variance of either sensor.

  33. (Figure: probability densities when combining two equally noisy sensors and when combining sensors with unequal noise; in each case the combined density is narrower than either sensor's, and the mean and variance of the posterior are shown.)

  34. Belief integration: what we sense depends on what we predicted. (Diagram: motor commands drive the muscles and change the state of the body, producing measured sensory consequences through the sensory system: proprioception, vision, audition. In parallel, a forward model maps the motor commands to predicted sensory consequences, which are combined with the measured ones.)

  35. The brain predicts the sensory consequences of motor commands Duhamel et al. Science 255, 90-92 (1992)

  36. Combining sensory predictions with sensory measurements should produce a better spatial estimate of the visual world Vaziri, Diedrichsen, Shadmehr (2006) Journal of Neuroscience

  37. Vaziri et al. (2006) J Neurosci

  38. How to set the initial var-cov matrix. In the homework, we will show that in general the inverse of the variance-covariance matrix grows with each observation: $P^{-1}(n|n) = P^{-1}(n|n-1) + \frac{1}{\sigma^2} x^{(n)} x^{(n)T}$. Now if we have absolutely no prior information on w, then before we see the first data point P(1|0) is infinite, and therefore its inverse is zero. After we see the first data point, we use the above equation to update our estimate, and the updated inverse becomes $\frac{1}{\sigma^2} x^{(1)} x^{(1)T}$. A reasonable and conservative estimate of the initial value of P is to set its inverse to this value: $P^{-1}(1|0) = \frac{1}{\sigma^2} x^{(1)} x^{(1)T}$.
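A short sketch of that initialization, assuming the scalar-output model y = w·x + noise with variance sigma2 (values illustrative):

```python
import numpy as np

sigma2 = 1.0                          # assumed output noise variance
x1 = np.array([1.0, 0.5])             # the first input vector
P_inv = np.zeros((2, 2))              # no prior information: P(1|0)^-1 = 0
P_inv += np.outer(x1, x1) / sigma2    # inverse covariance after one observation
# Conservative initialization: treat this post-first-observation value
# as the inverse of the initial var-cov matrix.
```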
