
Presentation Transcript


  1. Synergy, redundancy, and independence in population codes, revisited. Or: Are correlations important?
Peter Latham* and Sheila Nirenberg†
*Gatsby Computational Neuroscience Unit
†University of California, Los Angeles

  2. The neural coding problem. Estimate the stimulus from the responses:

s → r1, r2, ..., rn → ŝ?

Approach the problem probabilistically, via Bayes:

P(s|r1, r2, ..., rn) = P(r1, r2, ..., rn|s) P(s) / P(r1, r2, ..., rn)
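
As a minimal sketch of this Bayesian decode (all numbers invented for illustration), here is the one-neuron case in Python:

```python
import numpy as np

# Hypothetical example: two stimuli, one neuron with spike counts r in {0, 1, 2}.
# Rows of P_r_given_s are the response distributions P(r | s).
P_r_given_s = np.array([[0.7, 0.2, 0.1],    # s = s1
                        [0.1, 0.3, 0.6]])   # s = s2
P_s = np.array([0.5, 0.5])                  # prior P(s)

r = 2                                        # observed response
post = P_r_given_s[:, r] * P_s               # proportional to P(r | s) P(s)
post /= post.sum()                           # normalize by P(r)
print(post)                                  # ≈ [0.143, 0.857]: s2 is more likely
```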

  3. The problem is easy for one neuron (1-D) but harder for populations (≥ 2-D). Why? Because correlations force you to measure the probability in every bin: easy in 1-D, harder in 2-D, impossible in high-D (where "high" ~ 3).

[Figure: P(r|s) vs. response r (0-20) for a single neuron; the joint P(r1, r2|s) and the marginal product P(r1|s)P(r2|s) over the (r1, r2) plane for a pair.]

Note: this problem disappears when the responses are uncorrelated.
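
To make the scaling concrete (numbers ours, for illustration): with, say, 20 response bins per neuron, a joint histogram over n neurons has 20^n bins: 20 for one neuron, 400 for a pair, 8,000 for a triplet. At even ~10 stimulus repeats per bin, three neurons already demand on the order of 10^5 trials, which is why "high-D" starts at about 3.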

  4. ● If you want to understand how to decode spike trains, you have to figure out how to deal with correlations.
● The first step is to understand whether you need to deal with correlations at all.
● In other words, are correlations important?

  5. How to determine if correlations are important:
• Get rid of them by treating the cells as though they were independent, and then estimate the stimulus.
• If your estimate of the stimulus differs from the true estimate, then correlations are important. Otherwise they are not.
• Formally, compare Pind(s|r1, r2) to P(s|r1, r2), where the independent response distribution P(r1|s)P(r2|s) gives

Pind(s|r1, r2) = P(r1|s) P(r2|s) P(s) / ∑s' P(r1|s') P(r2|s') P(s')

If Pind(s|r1, r2) ≠ P(s|r1, r2), correlations are important for decoding. If Pind(s|r1, r2) = P(s|r1, r2), correlations are not important.
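
A sketch of this comparison with a small hypothetical response table (numbers and variable names ours, not from the talk):

```python
import numpy as np

# P_r_given_s[s, r1, r2] = P(r1, r2 | s): two stimuli, binary responses.
# Within each stimulus the two responses are correlated.
P_r_given_s = np.array([[[0.4, 0.1],
                         [0.1, 0.4]],    # s1: r1 and r2 tend to agree
                        [[0.1, 0.4],
                         [0.4, 0.1]]])   # s2: r1 and r2 tend to disagree
P_s = np.array([0.5, 0.5])               # prior P(s)

# True posterior: P(s | r1, r2) proportional to P(r1, r2 | s) P(s)
joint = P_r_given_s * P_s[:, None, None]
P_post = joint / joint.sum(axis=0, keepdims=True)

# Independent posterior: P_ind(s | r1, r2) proportional to P(r1|s) P(r2|s) P(s)
P1 = P_r_given_s.sum(axis=2)             # P(r1 | s)
P2 = P_r_given_s.sum(axis=1)             # P(r2 | s)
joint_ind = P1[:, :, None] * P2[:, None, :] * P_s[:, None, None]
P_ind = joint_ind / joint_ind.sum(axis=0, keepdims=True)

print(np.allclose(P_post, P_ind))        # False: here correlations matter
```

In this particular table the marginals P(r1|s) and P(r2|s) are uniform, so Pind(s|r1, r2) is flat while the true posterior is not: the correlations carry all the information.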

  6. One might wonder: how can Pind(s|r1, r2) = P(s|r1, r2) when the neurons are correlated, i.e., when P(r1|s)P(r2|s) ≠ P(r1, r2|s)?

[Figure: the joint P(r1, r2|s) and the marginal product P(r1|s)P(r2|s) for stimuli s1–s4 in the (r1, r2) plane.]

The neurons are correlated, that is, P(r1|s)P(r2|s) ≠ P(r1, r2|s), but the correlations don't matter: Pind(s|r1, r2) = P(s|r1, r2).
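
The same machinery can exhibit this point with a concrete table. The following is our own two-stimulus construction, not the figure's four-stimulus one: each stimulus lives on its own block of response space, with correlations inside each block.

```python
import numpy as np

# Two stimuli; r1, r2 in {0, 1, 2, 3}. s1 lives on {0,1}^2, s2 on {2,3}^2,
# and within each block the responses are correlated (0.4 vs. 0.1).
P_r_given_s = np.zeros((2, 4, 4))
P_r_given_s[0, 0:2, 0:2] = [[0.4, 0.1], [0.1, 0.4]]   # s1, correlated
P_r_given_s[1, 2:4, 2:4] = [[0.4, 0.1], [0.1, 0.4]]   # s2, correlated
P_s = np.array([0.5, 0.5])

P1 = P_r_given_s.sum(axis=2)                           # P(r1 | s)
P2 = P_r_given_s.sum(axis=1)                           # P(r2 | s)
product = P1[:, :, None] * P2[:, None, :]

# Correlated: the joint does not factorize ...
print(np.allclose(P_r_given_s, product))               # False

# ... yet which block r falls in already determines s, so the true and
# independent posteriors agree on every response that actually occurs.
joint = P_r_given_s * P_s[:, None, None]
joint_ind = product * P_s[:, None, None]
with np.errstate(invalid="ignore"):
    P_post = joint / joint.sum(axis=0, keepdims=True)
    P_ind = joint_ind / joint_ind.sum(axis=0, keepdims=True)
occurs = P_r_given_s.sum(axis=0) > 0                   # responses with P(r) > 0
print(np.allclose(P_post[:, occurs], P_ind[:, occurs]))  # True
```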

  7. Intuitively, the closer Pind(s|r1, r2) is to P(s|r1, r2), the less important correlations are. We measure "close" using

ΔI = ∑r1,r2,s P(r1, r2) P(s|r1, r2) log [ P(s|r1, r2) / Pind(s|r1, r2) ]

ΔI = 0 if and only if Pind(s|r1, r2) = P(s|r1, r2). ΔI is the penalty in yes/no questions for ignoring correlations, and an upper bound on the information loss. If ΔI/I is small, then you don't lose much information if you treat the cells as independent.
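
A direct transcription of ΔI as a function (our helper names; log base 2, so the answer is in bits, with the 0·log 0 convention handled by nansum):

```python
import numpy as np

def delta_I(P_r, P_post, P_ind):
    """ΔI = sum_{r,s} P(r) P(s|r) log2[ P(s|r) / P_ind(s|r) ], in bits.

    P_r[r1, r2]: response probabilities; P_post and P_ind: posteriors
    indexed [s, r1, r2]. Terms with P(r) = 0 or P(s|r) = 0 contribute 0.
    """
    with np.errstate(divide="ignore", invalid="ignore"):
        log_ratio = np.log2(P_post / P_ind)
    return np.nansum(P_r[None] * P_post * log_ratio)
```

For example, with the arrays from the slide-5 sketch, delta_I(joint.sum(axis=0), P_post, P_ind) evaluates to ≈ 0.28 bits; for the block construction on slide 6 it comes out 0, since the two posteriors agree on every response that occurs.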

  8. Quantifying information loss. Information is the log of the number of messages that can be transmitted over a noisy channel with a vanishingly small probability of error. An example: neurons coding for orientation.

  9. You know P(θ|r1, r2); from it you build θ̂(r1, r2), the optimal estimator. You know Pind(θ|r1, r2); from it you build θ̂ind(r1, r2), a suboptimal estimator.

[Figure: distributions of θ̂ given that θ1 or θ2 was presented, just separable at spacing Δθ; likewise for θ̂ind at spacing Δθind.]

I ≈ log(180/Δθ)        Iind ≈ log(180/Δθind)

Information loss: I − Iind.
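
For example, with hypothetical discrimination thresholds (numbers invented; log base 2 to express the result in bits):

```python
import numpy as np

# Suppose the optimal decoder discriminates orientations Δθ = 10° apart,
# while the independent decoder only manages Δθ_ind = 12°.
dtheta, dtheta_ind = 10.0, 12.0
I_opt = np.log2(180 / dtheta)       # ≈ 4.17 bits
I_ind = np.log2(180 / dtheta_ind)   # ≈ 3.91 bits
print(I_opt - I_ind)                # ≈ 0.26 bits lost by ignoring correlations
```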

  10. What if Δθ is large? ● Show multiple trials. ● Arrange that the stimuli appear in only two possible orders.

[Figure: θ̂ on each of 20 trials, for order #1 and order #2 of presenting θ1 and θ2.]

I ≈ (log of the number of distinct orders)/n = (log 2)/20. This becomes rigorous as n → ∞!
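
Worked out in bits: with n = 20 trials and only two possible orders, I ≈ log2(2)/20 = 0.05 bits per trial; more generally, with w distinguishable orders, I ≈ log2(w)/n.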

  11. Formal analysis: the general case. True distribution: p(r|s); approximate distribution: q(r|s). How many code words (a.k.a. orders) can you transmit using each?

c(1) = s1(1) s2(1) s3(1) ... sn(1)
c(2) = s1(2) s2(2) s3(2) ... sn(2)
...
c(w) = s1(w) s2(w) s3(w) ... sn(w)

Each code word is a different ordering of stimuli across the n trials.
● Observe r1 r2 r3 ... rn; guess the code word (guess w).
● More code words = mistakes are more likely.
● You can transmit more code words without mistakes if you use p to decode than if you use q.
● The difference tells you how much information you lose by using q rather than p.

  12. True probability: p(w|r1, r2, r3, ..., rn) ~ ∏i p(ri|si(w)) p(w)
Approximate probability: q(w|r1, r2, r3, ..., rn) ~ ∏i q(ri|si(w)) p(w)
Decode: ŵ = arg maxw ∏i p(ri|si(w)) or ∏i q(ri|si(w))
Want: ∏i p(ri|si(w*)) > ∏i p(ri|si(w)) for all w ≠ w*, where w* is the true code word.
Probability of error: Pe[p, w] = prob{∏i p(ri|si(w)) > ∏i p(ri|si(w*))}, and likewise Pe[q, w] with q in place of p.
The number of code words that can be transmitted with vanishingly small probability of error scales as ~ 1/Pe, so I[p] = log(1/Pe[p, w])/n and I[q] = log(1/Pe[q, w])/n.
Information loss = I[p] − I[q] ≤ ΔI.
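
A Monte Carlo sketch of this argument (all distributions, sizes, and names invented for illustration): draw responses from the true model p, decode the code word with p and with a degraded stand-in q, and compare error rates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes.
n_stim, n_resp, n_words, n_trials = 3, 4, 8, 20

# p[s, r]: true response distribution; q[s, r]: a smoothed approximation,
# standing in for a model that ignores some structure.
p = rng.dirichlet(np.ones(n_resp), size=n_stim)
q = 0.5 * p + 0.5 / n_resp

# Code words: different orderings of stimuli across the n_trials trials.
words = rng.integers(n_stim, size=(n_words, n_trials))

def decode(responses, model):
    # ŵ = arg max_w sum_i log model(r_i | s_i(w))
    loglik = np.log(model)[words, responses].sum(axis=1)
    return np.argmax(loglik)

n_runs, err_p, err_q = 2000, 0, 0
for _ in range(n_runs):
    w_true = rng.integers(n_words)
    # Draw r_i ~ p(r | s_i(w_true)) for each trial i.
    responses = np.array([rng.choice(n_resp, p=p[s]) for s in words[w_true]])
    err_p += decode(responses, p) != w_true
    err_q += decode(responses, q) != w_true

print(err_p / n_runs, err_q / n_runs)   # p should decode at least as well as q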

  13. Other measures. ΔIsynergy = I(r1, r2; s) − I(r1; s) − I(r2; s).
Intuition: if the responses taken together provide more information than the sum of the individual responses, which can only happen when correlations exist, then correlations "must" be important.
Good points: ● Cool(ish) name. ● Compelling intuition.
Bad points: ● The intuition is wrong: ΔIsynergy can't tell you whether correlations are important.
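
A sketch of the measure itself (our helper names; information in bits):

```python
import numpy as np

def mi(P_xy):
    """Mutual information, in bits, of a joint probability table P[x, y]."""
    Px = P_xy.sum(axis=1, keepdims=True)
    Py = P_xy.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = P_xy * np.log2(P_xy / (Px * Py))
    return np.nansum(terms)                        # 0·log 0 terms contribute 0

def delta_I_synergy(P_r_given_s, P_s):
    """ΔI_synergy = I(r1, r2; s) - I(r1; s) - I(r2; s), in bits."""
    joint = P_r_given_s * P_s[:, None, None]       # P(s, r1, r2)
    I_pair = mi(joint.reshape(len(P_s), -1))       # treat (r1, r2) as one symbol
    return I_pair - mi(joint.sum(axis=2)) - mi(joint.sum(axis=1))
```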

  14. Example: a case where you can decode perfectly, that is, Pind(s|r1, r2) = P(s|r1, r2) for all responses that occur, but ΔIsynergy > 0.

[Figure: the joint P(r1, r2|s) and the marginal product P(r1|s)P(r2|s) for stimuli s1–s3 in the (r1, r2) plane.]

Schneidman, Bialek and Berry (2003) used this example to argue that ΔIsynergy is a good measure of whether or not correlations are important. We find this baffling. ΔIsynergy can be zero, positive, or negative when Pind(s|r1, r2) = P(s|r1, r2) (Nirenberg and Latham, PNAS, 2003).
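
Here is a concrete distribution with the stated properties (our own construction in the spirit of the figure, not necessarily the published example): every response pair that occurs pins down the stimulus, so Pind = P where it matters, yet each response alone is ambiguous, giving ΔIsynergy > 0. It uses delta_I_synergy from the sketch above.

```python
import numpy as np

# Three equiprobable stimuli; r1, r2 in {0, 1, 2}. Each stimulus emits one
# of two "mirrored" response pairs, so the pair identifies s exactly but
# each response alone only narrows s down to two candidates.
P_s = np.ones(3) / 3
P = np.zeros((3, 3, 3))
P[0, 0, 1] = P[0, 1, 0] = 0.5        # s1 → (0,1) or (1,0)
P[1, 1, 2] = P[1, 2, 1] = 0.5        # s2 → (1,2) or (2,1)
P[2, 0, 2] = P[2, 2, 0] = 0.5        # s3 → (0,2) or (2,0)

# I(r1,r2; s) = log2(3) ≈ 1.585 bits while I(r1; s) = I(r2; s) ≈ 0.585 bits,
# so ΔI_synergy ≈ +0.415 bits, even though each occurring response pair is
# consistent with exactly one stimulus under both the true and the
# independent model, i.e. P_ind(s|r) = P(s|r) wherever P(r) > 0.
print(delta_I_synergy(P, P_s))        # ≈ 0.415
```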

  15. ΔIshuffled. Compare to the information from neurons that saw the same stimulus but at different times (so that correlations are removed): ΔIshuffled = Itrue − Ishuffled.
Intuition:
1. Ishuffled > Itrue: correlations hurt.
2. Ishuffled < Itrue: correlations help.
3. Ishuffled = Itrue: correlations don't matter.
Good point: ● Can be used to answer high-level questions about the neural code (e.g., what class of correlations increases information?).
Bad points: ● Intuition #3 is false; #1 and #2 are not so relevant, as they correspond to cases the brain doesn't see.
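
Ishuffled is just the information carried by the marginal-product (shuffled) responses; a sketch, reusing mi from the slide-13 code:

```python
def I_shuffled(P_r_given_s, P_s):
    """Information (bits) after shuffling away within-stimulus correlations:
    each P(r1, r2 | s) is replaced by the product of its marginals."""
    P1 = P_r_given_s.sum(axis=2)                   # P(r1 | s)
    P2 = P_r_given_s.sum(axis=1)                   # P(r2 | s)
    shuffled = P1[:, :, None] * P2[:, None, :]
    joint = shuffled * P_s[:, None, None]          # P(s) P(r1|s) P(r2|s)
    return mi(joint.reshape(len(P_s), -1))
```

For the three-stimulus construction above, Itrue = log2(3) ≈ 1.58 bits while I_shuffled(P, P_s) ≈ 1.08 bits, so ΔIshuffled > 0 even though Pind(s|r) = P(s|r) on every occurring response, which is exactly the confound the next slide illustrates.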

  16. Example: a case where you can decode perfectly, that is, Pind(s|r1, r2) = P(s|r1, r2) for all responses that occur, but ΔIshuffled > 0.

[Figure: the joint P(r1, r2|s) and the marginal product P(r1|s)P(r2|s) for stimuli s1 and s2 in the (r1, r2) plane; here I = 1 bit and Ishuffled = 3/4 bit.]

ΔIshuffled can be zero, positive, or negative when Pind(s|r1, r2) = P(s|r1, r2) (Nirenberg and Latham, PNAS, 2003).

  17. Summary #1. ΔIshuffled and ΔIsynergy do not measure the importance of correlations for decoding; they are confounded. ΔI does measure the importance of correlations for decoding:
• ΔI = 0 if and only if Pind(s|r1, r2) = P(s|r1, r2).
• ΔI is an upper bound on the information loss.

  18. Summary #2
1. Our goal was to answer the question: are correlations important for decoding?
2. We developed a quantitative information-theoretic measure, ΔI, which is an upper bound on the information loss associated with ignoring correlations.
3. For pairs of neurons, ΔI/I is small (< 12%), except in the LGN, where it is 20-40%.
4. For larger populations, this is still an open question.
