
Population coding


Presentation Transcript


  1. Population coding
  • Population code formulation
  • Methods for decoding:
    • population vector
    • Bayesian inference
    • maximum a posteriori
    • maximum likelihood
  • Fisher information

  2. Cricket cercal cells coding wind velocity

  3. Population vector: RMS error in the estimate (Theunissen & Miller, 1991)

  4. Population coding in M1
  Cosine tuning: $f_a(s) = r_0 + (r_{max} - r_0)\cos(s - s_a)$
  Population vector: $\vec{v}_{pop} = \sum_a (r_a - r_0)\,\vec{c}_a$, where $\vec{c}_a$ is a unit vector along neuron $a$'s preferred direction.
  For sufficiently large $N$, $\vec{v}_{pop}$ is parallel to the direction of arm movement.
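
A minimal numerical sketch of population-vector decoding under cosine tuning; the neuron count, rates, and seed are illustrative choices, not values from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: N M1-like neurons with cosine tuning to movement
# direction s, f_a(s) = r0 + (rmax - r0) * cos(s - s_a); preferred
# directions s_a drawn uniformly. Rates stay positive by construction.
N, r0, rmax = 100, 30.0, 50.0
s_pref = rng.uniform(0.0, 2.0 * np.pi, N)

s_true = 1.2                                      # true direction (radians)
rates = r0 + (rmax - r0) * np.cos(s_true - s_pref)
r = rng.poisson(rates)                            # noisy spike counts

# Population vector: preferred-direction unit vectors c_a weighted by
# baseline-subtracted responses, v_pop = sum_a (r_a - r0) c_a.
c = np.column_stack([np.cos(s_pref), np.sin(s_pref)])
v_pop = ((r - r0)[:, None] * c).sum(axis=0)
s_est = np.arctan2(v_pop[1], v_pop[0]) % (2.0 * np.pi)
print(f"true direction {s_true:.3f}, population-vector estimate {s_est:.3f}")
```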

  5. The population vector is neither general nor optimal. “Optimal”: Bayesian inference and MAP

  6. Bayesian inference
  By Bayes' law, $p[s|r] = \frac{p[r|s]\,p[s]}{p[r]}$.
  Introduce a cost function $L(s, s_{Bayes})$ and minimise the mean cost $\int ds\, L(s, s_{Bayes})\, p[s|r]$.
  For least squares, $L(s, s_{Bayes}) = (s - s_{Bayes})^2$; the solution is the conditional mean, $s_{Bayes} = \int ds\, s\, p[s|r]$.
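
A grid-based sketch of the least-squares Bayes estimate (the conditional mean); the prior and the stand-in likelihood below are hypothetical placeholders, to be replaced by the population $P[r|s]$ from the later slides:

```python
import numpy as np

# Form the posterior p[s|r] ∝ p[r|s] p[s] on a stimulus grid, then take
# the conditional mean, which minimises the quadratic loss (s - s_Bayes)^2.
s_grid = np.linspace(-5.0, 5.0, 1001)
ds = s_grid[1] - s_grid[0]

prior = np.exp(-0.5 * s_grid**2)                        # e.g. N(0, 1) prior
likelihood = np.exp(-0.5 * ((s_grid - 1.0) / 0.5)**2)   # stand-in p[r|s]

posterior = likelihood * prior
posterior /= posterior.sum() * ds                        # normalise

s_bayes = (s_grid * posterior).sum() * ds                # conditional mean
print(f"Bayes least-squares estimate: {s_bayes:.3f}")
```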

  7. MAP and ML
  MAP: the $s^*$ that maximizes $p[s|r]$. ML: the $s^*$ that maximizes $p[r|s]$.
  The difference is the role of the prior: the two objectives differ by a factor $p[s]/p[r]$.
  For cercal data: (comparison figure)

  8. Decoding an arbitrary continuous stimulus
  E.g. Gaussian tuning curves: $f_a(s) = r_{max}\exp\!\left(-\frac{(s - s_a)^2}{2\sigma_a^2}\right)$

  9. Need to know the full $P[r|s]$.
  Assume Poisson: in a window of length $T$, each cell's spike count $r_a T$ is Poisson with mean $f_a(s)T$.
  Assume independent: $P[r|s] = \prod_{a=1}^{N} \frac{(f_a(s)T)^{r_a T}}{(r_a T)!}\, e^{-f_a(s)T}$
  (figure: population response of 11 cells with Gaussian tuning curves)
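
A sketch of this independent-Poisson likelihood for the 11-cell example; the names and parameter values (`s_pref`, `sigma`, `rmax`, `T`) are illustrative, not from the slides:

```python
import numpy as np
from scipy.stats import poisson

s_pref = np.linspace(-5.0, 5.0, 11)   # preferred stimulus values, 11 cells
sigma, rmax, T = 1.0, 20.0, 1.0       # width, peak rate, counting window

def tuning(s):
    """Gaussian tuning curves f_a(s)."""
    return rmax * np.exp(-0.5 * ((s - s_pref) / sigma) ** 2)

def log_likelihood(s, counts):
    """log P[r|s]: independent Poisson spike counts with means f_a(s) * T."""
    return poisson.logpmf(counts, tuning(s) * T).sum()
```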

  10. Apply ML: maximise $P[r|s]$ with respect to $s$.
  Set the derivative of $\ln P[r|s]$ to zero and use $\sum_a f_a(s) \approx$ constant, leaving $\sum_a r_a\, f_a'(s^*)/f_a(s^*) = 0$.
  From the Gaussianity of the tuning curves, $f_a'/f_a = (s_a - s)/\sigma_a^2$, so $s^* = \frac{\sum_a r_a s_a/\sigma_a^2}{\sum_a r_a/\sigma_a^2}$.
  If all the $\sigma_a$ are the same, $s^* = \frac{\sum_a r_a s_a}{\sum_a r_a}$.
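
The equal-width closed form in code, with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Closed-form ML estimate for equal-width Gaussian tuning curves:
# s* = sum_a r_a s_a / sum_a r_a, the response-weighted centre of mass
# of the preferred values.
s_pref = np.linspace(-5.0, 5.0, 11)
sigma, rmax, T = 1.0, 20.0, 1.0

s_true = 0.7
counts = rng.poisson(rmax * np.exp(-0.5 * ((s_true - s_pref) / sigma) ** 2) * T)

s_ml = (counts * s_pref).sum() / counts.sum()
print(f"true {s_true:.2f}, ML estimate {s_ml:.2f}")
```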

  11. Apply MAP: maximise $p[s|r]$ with respect to $s$.
  Set the derivative of $\ln P[r|s] + \ln p[s]$ to zero, again using $\sum_a f_a(s) \approx$ constant: $T\sum_a r_a\, f_a'(s^*)/f_a(s^*) + \partial_s \ln p[s^*] = 0$.
  From the Gaussianity of the tuning curves, with a Gaussian prior (mean $s_{prior}$, variance $\sigma_{prior}^2$): $s^* = \frac{T\sum_a r_a s_a/\sigma_a^2 + s_{prior}/\sigma_{prior}^2}{T\sum_a r_a/\sigma_a^2 + 1/\sigma_{prior}^2}$

  12. Given this data: (figure) MAP estimates under a Gaussian prior with mean $-2$ and variance $1$, and under a constant (flat) prior.
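
A sketch of the MAP estimator using the slide's prior (mean −2, variance 1); the tuning-curve setup is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

s_pref = np.linspace(-5.0, 5.0, 11)
sigma, rmax, T = 1.0, 20.0, 1.0
mu_prior, var_prior = -2.0, 1.0   # Gaussian prior from the slide

s_true = 0.0
counts = rng.poisson(rmax * np.exp(-0.5 * ((s_true - s_pref) / sigma) ** 2) * T)

# With spike counts n_a = r_a * T, the T * sum_a r_a ... terms in the
# slide's formula become plain sums over the counts n_a.
num = (counts * s_pref).sum() / sigma**2 + mu_prior / var_prior
den = counts.sum() / sigma**2 + 1.0 / var_prior
s_map = num / den   # a flat prior drops the prior terms: s_map -> s_ml
print(f"true {s_true:.2f}, MAP estimate {s_map:.2f}")
```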

  13. How good is our estimate? For stimulus $s$, we have an estimate $s_{est}$.
  Bias: $b_{est}(s) = \langle s_{est} \rangle - s$
  Variance: $\sigma_{est}^2(s) = \langle (s_{est} - \langle s_{est} \rangle)^2 \rangle$
  Mean square error: $\langle (s_{est} - s)^2 \rangle = \sigma_{est}^2(s) + b_{est}^2(s)$
  Cramér-Rao bound: $\sigma_{est}^2(s) \geq \frac{(1 + b_{est}'(s))^2}{I_F(s)}$, where $I_F(s)$ is the Fisher information.
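
A Monte Carlo check of the decomposition MSE = variance + bias², applied to the ML decoder from slide 10 (all setup values illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

s_pref = np.linspace(-5.0, 5.0, 11)
sigma, rmax, T = 1.0, 20.0, 1.0
s_true, trials = 0.7, 100_000

# Simulate many trials and decode each with the equal-width ML formula.
f = rmax * np.exp(-0.5 * ((s_true - s_pref) / sigma) ** 2) * T
counts = rng.poisson(f, size=(trials, s_pref.size))
s_est = (counts * s_pref).sum(axis=1) / counts.sum(axis=1)

bias = s_est.mean() - s_true
variance = s_est.var()
mse = np.mean((s_est - s_true) ** 2)
print(f"bias={bias:.4f}  variance={variance:.4f}  "
      f"mse={mse:.4f}  var+bias^2={variance + bias**2:.4f}")
```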

  14. Fisher information: $I_F(s) = \left\langle -\frac{\partial^2}{\partial s^2} \ln p[r|s] \right\rangle$
  Alternatively: $I_F(s) = \left\langle \left(\frac{\partial}{\partial s} \ln p[r|s]\right)^{\!2} \right\rangle$
  For the Gaussian tuning curves (independent Poisson population): $I_F(s) = T \sum_a \frac{f_a'(s)^2}{f_a(s)}$
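
The Poisson-population expression in code, with illustrative parameters:

```python
import numpy as np

# I_F(s) = T * sum_a f_a'(s)^2 / f_a(s); for Gaussian tuning curves
# f_a'/f_a = (s_a - s)/sigma^2, so this equals
# T * sum_a f_a(s) * (s_a - s)^2 / sigma^4 (numerically safer form).
s_pref = np.linspace(-5.0, 5.0, 11)
sigma, rmax, T = 1.0, 20.0, 1.0

def fisher_info(s):
    f = rmax * np.exp(-0.5 * ((s - s_pref) / sigma) ** 2)
    return T * (f * ((s_pref - s) / sigma**2) ** 2).sum()

print(f"I_F(0) = {fisher_info(0.0):.1f}")
```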

  15. Fisher information for Gaussian tuning curves (figure): quantifies local stimulus discriminability.

  16. Do narrow or broad tuning curves produce better encodings?
  Approximate the sum by an integral over densely spaced curves (density $\rho$): $I_F \approx \sqrt{2\pi}\,\rho\, r_{max} T/\sigma$.
  Thus $I_F \propto 1/\sigma$: narrow tuning curves are better. But not in higher dimensions! For a $D$-dimensional stimulus, $I_F \propto \sigma^{D-2}$, so broad tuning wins for $D > 2$.
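
A numerical check of the one-dimensional scaling (grid spacing and rates are illustrative): halving the tuning width should double the total Fisher information.

```python
import numpy as np

# Densely, evenly spaced Gaussian tuning curves in 1-D: total Fisher
# information at a fixed stimulus scales as 1/sigma.
s_pref = np.linspace(-20.0, 20.0, 401)   # spacing 0.1
rmax, T, s = 20.0, 1.0, 0.0

for sigma in (0.5, 1.0, 2.0):
    f = rmax * np.exp(-0.5 * ((s - s_pref) / sigma) ** 2)
    I_F = T * (f * ((s_pref - s) / sigma**2) ** 2).sum()
    print(f"sigma = {sigma:3.1f}  ->  I_F = {I_F:7.1f}")
```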

  17. Fisher information and discrimination
  Recall $d'$ = mean difference / standard deviation. One can also decode and then discriminate using the decoded values.
  Trying to discriminate $s$ and $s + \Delta s$: the difference in the estimates is $\Delta s$ (unbiased) and the variance of each estimate is $1/I_F(s)$, so $d' = \Delta s \sqrt{I_F(s)}$.
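
A Monte Carlo check that discriminating via decoded values reproduces $d' = \Delta s \sqrt{I_F(s)}$; all setup values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

s_pref = np.linspace(-5.0, 5.0, 11)
sigma, rmax, T = 1.0, 20.0, 1.0
s, d_s, trials = 0.0, 0.2, 50_000

def decode(stim):
    """Sample Poisson responses and apply the equal-width ML decoder."""
    f = rmax * np.exp(-0.5 * ((stim - s_pref) / sigma) ** 2) * T
    counts = rng.poisson(f, size=(trials, s_pref.size))
    return (counts * s_pref).sum(axis=1) / counts.sum(axis=1)

est_a, est_b = decode(s), decode(s + d_s)
d_empirical = (est_b.mean() - est_a.mean()) / est_a.std()

# Prediction from Fisher information.
f = rmax * np.exp(-0.5 * ((s - s_pref) / sigma) ** 2)
I_F = T * (f * ((s_pref - s) / sigma**2) ** 2).sum()
d_predicted = d_s * np.sqrt(I_F)
print(f"empirical d' = {d_empirical:.2f}, predicted d' = {d_predicted:.2f}")
```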

  18. Comparison of Fisher information and human discrimination thresholds for orientation tuning
  (figure: minimum standard deviation of the orientation-angle estimate from the Cramér-Rao bound, against data from discrimination thresholds for the orientation of objects, as a function of size and eccentricity)
