
Neural Prediction Challenge


Presentation Transcript


  1. Neural Prediction Challenge – Gaurav Mishra, Pulkit Agrawal

  2. How do neurons work • Stimuli → Neurons respond (Excite/Inhibit) • ‘Electrical Signals’ called Spikes • Spikes encode information! (Many models)

  3. Our Case… • Stimuli → Video of Natural Images • Response of 12 V1 neurons → #Spikes

  4. Our Data… • A movie for each neuron • Frames → Downsampled images (16x16) • Each frame spans twice the receptive field diameter and lasts 16 ms • Known: #Spikes/frame • Goal: build a model correlating stimulus and spike count (a data-layout sketch follows below).
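
As a rough illustration, the per-neuron data might be organized as follows. This is a minimal NumPy sketch; the array names, sizes, and random stand-in values are assumptions, not the actual challenge data.

```python
import numpy as np

# Hypothetical layout based on the slide: one movie per neuron,
# frames downsampled to 16x16 pixels, one spike count per 16 ms frame.
n_frames = 10000
frames = np.random.rand(n_frames, 16, 16)        # stimulus movie (stand-in data)
spike_counts = np.random.poisson(1.0, n_frames)  # spikes per frame (stand-in data)

# Flatten each frame into a 256-dimensional vector so that standard
# regression models can map stimulus -> spike count.
X = frames.reshape(n_frames, -1)   # shape (n_frames, 256)
y = spike_counts.astype(float)     # shape (n_frames,)
```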

  5. Method 1: RNNs • Recursive neural networks • Composed of feed-forward and feedback subnets • Originally designed for controlling nonlinear dynamical systems. Source: Hush et al., The Recursive Neural Network and its Application in Control Theory, Computers Elect. Engng., Vol. 19, No. 4, pp. 333–341, 1993.

  6. An exemplary recursive multi-layer perceptron:
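The figure itself is not in the transcript; the following is a minimal NumPy sketch of a recurrent multi-layer perceptron of the kind described, assuming one hidden layer with a feedback loop. The layer sizes, weight scales, and tanh nonlinearity are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 256, 32                          # 16x16 input frame, assumed hidden size
W_in = rng.normal(0, 0.1, (n_hidden, n_in))       # feed-forward weights
W_fb = rng.normal(0, 0.1, (n_hidden, n_hidden))   # feedback (recurrent) weights
w_out = rng.normal(0, 0.1, n_hidden)              # linear readout to a spike rate

def rmlp_forward(frames_flat):
    """Run the recurrent MLP over a sequence of flattened frames."""
    h = np.zeros(n_hidden)          # hidden state carries past context
    rates = []
    for x in frames_flat:
        # hidden activation mixes the current frame with the previous state
        h = np.tanh(W_in @ x + W_fb @ h)
        rates.append(w_out @ h)     # predicted spike rate for this frame
    return np.array(rates)

pred = rmlp_forward(np.random.rand(100, 256))     # stand-in stimulus
```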

  7. Echo State Networks • Dynamic neural networks • The hidden layer of an ESN consists of a single large reservoir of neurons connected to each other randomly.
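
A minimal sketch of the echo-state idea, assuming a fixed random reservoir driven by flattened frames and a ridge-regression readout; the reservoir size, spectral radius, and regularization strength are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 256, 500                        # reservoir size is an assumption

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.normal(0, 1, (n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))   # keep spectral radius < 1

def reservoir_states(frames_flat):
    """Drive the fixed random reservoir with the stimulus frames."""
    h = np.zeros(n_res)
    states = []
    for x in frames_flat:
        h = np.tanh(W_in @ x + W_res @ h)
        states.append(h.copy())
    return np.array(states)

# Only the linear readout is trained (ridge regression); the reservoir stays fixed.
X = np.random.rand(200, 256)                       # stand-in stimulus
y = np.random.poisson(1.0, 200).astype(float)      # stand-in spike counts
H = reservoir_states(X)
ridge = 1e-2
w_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_res), H.T @ y)
y_hat = H @ w_out
```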

  8. Using RNNs • RNNs have been used for inverse modeling • Can handle time-series data thanks to their feedback and delay mechanisms • May need to downsample the data even further to keep the network size manageable.

  9. Results • Accuracy: average around 50 percent, best case 89 percent • But does percentage error give the correct picture? No. • The correlation coefficient gives a more accurate picture of network performance (compare the two metrics in the sketch below).
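
A small sketch of why a raw accuracy figure can mislead on sparse spike counts while the correlation coefficient does not. The exact way accuracy was scored on the slides is not given, so the metric definitions here are assumptions.

```python
import numpy as np

def percent_accuracy(y_true, y_pred):
    # fraction of frames where the rounded prediction matches the spike count
    return np.mean(np.round(y_pred) == y_true)

def correlation(y_true, y_pred):
    # Pearson correlation coefficient between predicted and observed counts
    return np.corrcoef(y_true, y_pred)[0, 1]

# A constant prediction can look "accurate" on sparse spike counts
y_true = np.random.poisson(0.3, 1000)    # mostly zeros
y_flat = np.zeros(1000)                  # always predict zero
print(percent_accuracy(y_true, y_flat))  # high, yet the model explains nothing
# correlation(y_true, y_flat) is undefined (NaN) for a constant prediction,
# which is exactly why CC exposes uninformative models.
```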

  10. Results • Average CC: 0.15

  11. STRF • Problem: Stimulus → [Black Box] → Spike Rate • Concept of a ‘filter’ over both space and time • Latency matters (a lagged-filter sketch follows below).
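
A minimal sketch of a space-time filter applied to lagged stimulus frames, assuming a simple ridge-regularized linear fit. The number of lags, the regularization strength, and the stand-in data are assumptions.

```python
import numpy as np

def lagged_design_matrix(frames_flat, n_lags):
    """Stack the current frame and the n_lags-1 preceding frames for each time
    bin, so the filter can cover both space (pixels) and time (latency)."""
    n_frames, n_pix = frames_flat.shape
    X = np.zeros((n_frames, n_lags * n_pix))
    for lag in range(n_lags):
        X[lag:, lag * n_pix:(lag + 1) * n_pix] = frames_flat[:n_frames - lag]
    return X

frames_flat = np.random.rand(1000, 256)              # stand-in stimulus
spikes = np.random.poisson(1.0, 1000).astype(float)  # stand-in spike counts

n_lags = 8   # about 8 frames x 16 ms ≈ 128 ms of latency (assumed)
X = lagged_design_matrix(frames_flat, n_lags)

# Regularized linear fit of the space-time filter (the STRF)
lam = 1e2
strf = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ spikes)
pred = X @ strf
```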

  12. Phase Invariance..

  13. The Math of Estimation..
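
The transcript does not include the equations from this slide; the following is a sketch of the generic linear STRF estimate (in the spirit of Theunissen et al.), not necessarily the exact form used here. S is the lagged stimulus matrix, r the observed spike counts, and λ a ridge term.

```latex
% Linear model: predicted spike rate as a space-time filter applied to the stimulus
\hat{r}(t) = \sum_{\tau=0}^{T} \sum_{x,y} h(x, y, \tau)\, s(x, y, t - \tau)

% Least-squares / regularized reverse-correlation estimate of the filter
\hat{h} = \left(S^{\top} S + \lambda I\right)^{-1} S^{\top} r
```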

  14. Current Results • Using AdaBoost: • Accuracy: 34 percent • CC: 0.1543 • STRF: • CC: 0.1526
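
A sketch of how AdaBoost might be applied here, assuming scikit-learn's AdaBoostRegressor on flattened stimulus features. The feature layout, train/test split, and stand-in data are assumptions rather than the authors' actual setup.

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor

# Stand-in data: stimulus features and spike counts (shapes assumed)
X = np.random.rand(1000, 256)
y = np.random.poisson(1.0, 1000).astype(float)

n_train = 800
model = AdaBoostRegressor(n_estimators=50, random_state=0)
model.fit(X[:n_train], y[:n_train])

y_pred = model.predict(X[n_train:])
cc = np.corrcoef(y[n_train:], y_pred)[0, 1]   # correlation coefficient on held-out frames
print(f"held-out CC: {cc:.3f}")
```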

  15. References • S. David, J. Gallant, Predicting neural responses during natural vision, Network: Computation in Neural Systems, 2005 • Wikipedia article on neural coding (http://en.wikipedia.org/wiki/Neural_coding) • Theunissen et al., Estimating spatio-temporal receptive fields of auditory and visual neurons from their responses to natural stimuli, Network: Computation in Neural Systems 12:289–316 • Hush et al., The Recursive Neural Network and its Application in Control Theory, Computers Elect. Engng., Vol. 19, No. 4, pp. 333–341, 1993 • Le Yang, Yanbo Xue, Development of a New Recurrent Neural Network Toolbox (RNN-Tool) (http://soma.mcmaster.ca/~yxue/papers/RMLP_ESN_Report_Yang_Xue.pdf)
