
Reading population codes: a neural implementation of ideal observers

Presentation Transcript


  1. Reading population codes: a neural implementation of ideal observers
  Sophie Deneve, Peter Latham, and Alexandre Pouget

  2. Encode / decode: the stimulus (s) is encoded by a population of neurons as a response (r); decoding recovers an estimate of s from r.

  3. Tuning curves
  • sensory and motor information is often encoded in “tuning curves”
  • neurons give a characteristic bell-shaped response
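
A minimal Python sketch of such a bell-shaped tuning curve; the Gaussian shape and all parameter values here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def tuning_curve(s, s_pref, r_max=50.0, width=20.0, baseline=2.0):
    """Mean firing rate of a neuron with preferred stimulus s_pref in response
    to stimulus s: a bell-shaped (Gaussian) curve with a small baseline rate."""
    return baseline + r_max * np.exp(-0.5 * ((s - s_pref) / width) ** 2)
```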

  4. Difficulty of decoding
  • noisy neurons create variable responses to the same stimulus
  • the brain must estimate the encoded variables from the “noisy hill” of a population response
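
Building on the sketch above, a hypothetical “noisy hill”: the ideal hill of population activity plus additive Gaussian noise (the noise model and parameters are assumptions for illustration):

```python
def noisy_population_response(s, preferred, sigma_noise=5.0, rng=None):
    """Noisy hill of activity: the tuning-curve responses of a population of
    neurons (preferred stimuli in `preferred`) corrupted by Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    mean = tuning_curve(s, preferred)            # ideal hill of activity
    return mean + sigma_noise * rng.normal(size=preferred.shape)

preferred = np.linspace(-180.0, 180.0, 64, endpoint=False)   # preferred directions (deg)
r = noisy_population_response(s=30.0, preferred=preferred)
```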

  5. Population vector estimator
  • assign each neuron a vector
  • vector length is proportional to the neuron's activity
  • vector direction corresponds to its preferred direction
  • sum the vectors

  6. Population vector estimator
  • vector summation is equivalent to fitting a cosine function
  • the peak of the cosine is the estimate of direction
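
A sketch of the population-vector estimate for a direction variable, using the noisy response r and preferred directions from the sketch above (all illustrative):

```python
def population_vector(r, preferred_deg):
    """Assign each neuron a vector whose length is its activity and whose
    direction is its preferred direction; the angle of the vector sum is the
    direction estimate (equivalent to the peak of a fitted cosine)."""
    angles = np.deg2rad(preferred_deg)
    x = np.sum(r * np.cos(angles))
    y = np.sum(r * np.sin(angles))
    return np.rad2deg(np.arctan2(y, x))

estimate = population_vector(r, preferred)
```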

  7. How good is an estimator?
  • need to compare the variance of the estimator over repeated presentations to a lower bound
  • the maximum likelihood estimate gives the lower bound on the variance for a given amount of independent noise
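
One way to measure this, sketched under the same illustrative assumptions: repeat the noisy presentation of a fixed stimulus many times and compute the variance of the resulting estimates, which can then be compared against the lower bound:

```python
def estimator_variance(estimator, s_true, preferred, n_trials=1000, seed=0):
    """Empirical variance of an estimator over repeated noisy presentations
    of the same stimulus s_true."""
    rng = np.random.default_rng(seed)
    estimates = [estimator(noisy_population_response(s_true, preferred, rng=rng), preferred)
                 for _ in range(n_trials)]
    return np.var(estimates)

var_pv = estimator_variance(population_vector, s_true=30.0, preferred=preferred)
```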

  8. Encode / decode (repeated): the stimulus (s) is encoded by a population of neurons as a response (r); decoding recovers an estimate of s from r.

  9. Maximum likelihood decoding: encoding produces the noisy response; a maximum likelihood estimator performs the decoding.
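
A minimal maximum-likelihood decoder under the illustrative model used above (independent Gaussian noise of fixed variance), implemented as a brute-force grid search over candidate stimuli; this is a reference decoder, not the recurrent network proposed in the paper:

```python
def ml_decode(r, preferred, candidates=np.linspace(-180.0, 180.0, 721)):
    """Maximum likelihood estimate under independent, fixed-variance Gaussian
    noise: the candidate stimulus whose ideal hill of activity is closest to
    the observed response r in squared error."""
    errors = [np.sum((r - tuning_curve(c, preferred)) ** 2) for c in candidates]
    return candidates[int(np.argmin(errors))]

s_hat = ml_decode(r, preferred)
```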

  10. Goal: a biological ML estimator
  • a recurrent neural network with broadly tuned units
  • can achieve the ML estimate when noise is independent of firing rate
  • can approximate the ML estimate with activity-dependent noise

  11. General architecture
  • units are fully connected and arranged in frequency columns (preferred frequency Pλ) and orientation rows (preferred orientation PΘ), on a 20 × 20 grid
  • weights implement a 2-D Gaussian filter
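
A sketch of such a weight kernel on a 20 × 20 grid of preferred orientations and frequencies; the Gaussian widths and the wrap-around (circular) distance are assumptions made for illustration:

```python
def gaussian_weights(n_theta=20, n_lambda=20, sigma_theta=2.0, sigma_lambda=2.0):
    """2-D Gaussian weight kernel over the (preferred orientation, preferred
    frequency) grid; circular distances are used so the kernel can be applied
    as a circular convolution across the fully connected network."""
    dt = np.arange(n_theta)
    dl = np.arange(n_lambda)
    dt = np.minimum(dt, n_theta - dt)        # circular distance along orientation
    dl = np.minimum(dl, n_lambda - dl)       # circular distance along frequency
    return np.exp(-0.5 * (dt[:, None] / sigma_theta) ** 2
                  - 0.5 * (dl[None, :] / sigma_lambda) ** 2)
```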

  12. Input tuning curves
  • circular normal functions with some spontaneous activity
  • Gaussian noise is added to the inputs
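
The equations on this slide do not appear in the transcript. As an illustrative stand-in, one plausible form of a circular normal tuning curve with spontaneous activity, plus additive Gaussian noise, shown along the orientation dimension only (the amplitude K, concentration nu, spontaneous rate, and noise level are all made-up values):

```python
def circular_normal(theta, theta_pref, K=20.0, nu=2.0, spont=3.0):
    """Circular normal tuning curve with spontaneous activity: peaks at the
    preferred orientation theta_pref and wraps around smoothly."""
    return K * np.exp(nu * (np.cos(np.deg2rad(theta - theta_pref)) - 1.0)) + spont

def noisy_input(theta, theta_pref, sigma=2.0, rng=None):
    """Input activity: circular normal tuning plus additive Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    return circular_normal(theta, theta_pref) + sigma * rng.normal(size=np.shape(theta_pref))
```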

  13. Unit updates &amp; normalization
  • unit activities are convolved with the filter (local excitation)
  • responses are normalized divisively (global inhibition)
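
A minimal sketch of one network iteration combining these two operations; the squaring nonlinearity, the constants, and the FFT-based circular convolution are assumptions for illustration, not necessarily the exact update rule of the paper:

```python
def iterate(o, w, S=0.1, mu=0.002):
    """One update step: local excitation (circular convolution of the activity
    o with the Gaussian weight kernel w), then global divisive inhibition
    (squaring followed by normalization by the total squared activity)."""
    u = np.real(np.fft.ifft2(np.fft.fft2(o) * np.fft.fft2(w)))   # local excitation
    return u ** 2 / (S + mu * np.sum(u ** 2))                    # divisive normalization

# example: start from a noisy 20 x 20 input pattern and iterate
rng = np.random.default_rng(0)
o = rng.random((20, 20))
w = gaussian_weights()          # kernel from the architecture sketch above
for _ in range(20):
    o = iterate(o, w)
```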

  14. Results
  • the network converges rapidly
  • convergence is strongly dependent on contrast

  15. Results
  • the response curve is sigmoidal after 3 iterations and becomes a step after 20
  • [panel: response of an actual neuron for comparison]

  16. Noise effects (flat noise vs. proportional noise)
  • width of the input tuning curve held constant
  • width of the output tuning curve varied by adjusting the spatial extent of the weights
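
A sketch of the two input noise models as they might be simulated (the variance scale sigma2 is a placeholder):

```python
def add_noise(mean_rates, kind="flat", sigma2=4.0, rng=None):
    """Flat noise: Gaussian with a fixed variance, independent of the mean
    rate. Proportional noise: Gaussian with variance proportional to the
    mean rate, so strongly driven neurons are also the noisiest."""
    rng = np.random.default_rng() if rng is None else rng
    if kind == "flat":
        var = sigma2 * np.ones_like(mean_rates)
    else:                                   # "proportional"
        var = sigma2 * mean_rates
    return mean_rates + np.sqrt(var) * rng.normal(size=mean_rates.shape)
```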

  17. Analysis (flat noise vs. proportional noise)
  • Q1: Why does the optimal width depend on the noise?
  • Q2: Why does the network perform better for flat noise?

  18. Analysis
  • smallest achievable variance (Cramér-Rao bound): σ² ≥ 1 / [ f′(Θ)ᵀ R⁻¹ f′(Θ) + ½ Tr( (R⁻¹ ∂R/∂Θ)² ) ], where R⁻¹ is the inverse of the covariance matrix of the noise and f′(Θ) is the vector of derivatives of the input tuning curves with respect to Θ
  • for Gaussian noise, the trace term is 0 when R is independent of Θ (flat noise)
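
A sketch of how this bound could be evaluated numerically for Gaussian noise, using finite-difference derivatives and the circular normal tuning curve from the earlier sketch; the covariance models below (flat and proportional) and their scale factors are illustrative assumptions:

```python
def cramer_rao_bound(theta, prefs, cov_fn, d=1e-3):
    """Smallest achievable variance for Gaussian noise:
    1 / ( f'(theta)^T R^-1 f'(theta) + 0.5 * Tr[(R^-1 dR/dtheta)^2] ),
    where cov_fn(theta) returns the noise covariance matrix R at theta."""
    f_prime = (circular_normal(theta + d, prefs) - circular_normal(theta - d, prefs)) / (2 * d)
    R = cov_fn(theta)
    R_prime = (cov_fn(theta + d) - cov_fn(theta - d)) / (2 * d)
    R_inv = np.linalg.inv(R)
    fisher = f_prime @ R_inv @ f_prime + 0.5 * np.trace(R_inv @ R_prime @ R_inv @ R_prime)
    return 1.0 / fisher

prefs = np.linspace(0.0, 360.0, 64, endpoint=False)              # preferred orientations (deg)
flat_cov = lambda th: 4.0 * np.eye(len(prefs))                   # R independent of theta
prop_cov = lambda th: 0.5 * np.diag(circular_normal(th, prefs))  # variance grows with mean rate
bound_flat = cramer_rao_bound(30.0, prefs, flat_cov)             # trace term vanishes here
bound_prop = cramer_rao_bound(30.0, prefs, prop_cov)
```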

  19. Summary
  • the network gives a good approximation of the optimal tuning curve determined by ML
  • the type of noise (flat vs. proportional) affects the variance and the optimal tuning width
