
Assignment 3: Hebbian learning


Presentation Transcript


  1. Assignment 3: Hebbian learning

  2. Outline
  • The assignment requires you to
    • Download the code and data
    • Run the code in its default configuration
    • Experiment with altering parameters of the model
  • Optional (for bonus marks):
    • Modify the model, for example to:
      • Incorporate other forms of synaptic normalization (a sketch of one option follows this slide)
      • Incorporate other learning rules (e.g., ICA)
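
One possible starting point for the synaptic-normalization option is sketched below: a plain Hebbian step followed by explicit multiplicative normalization of each weight vector, instead of the Oja term used in the assignment code. The variable names and values here (w, u, v, tauw, Nu) are illustrative stand-ins, not part of the provided code.

%Minimal sketch: multiplicative synaptic normalization as an alternative to the Oja term
nv1cells=4; Nu=9; tauw=1e3; %toy sizes, for illustration only
w=normrnd(0,1/Nu,nv1cells,Nu); %random initial weights
u=normrnd(0,1,Nu,1); %one input vector, standing in for an image patch
v=w*u; %outputs
w=w+(1/tauw)*(v*u'); %plain Hebbian update (no Oja term)
w=w./repmat(sqrt(sum(w.^2,2)),1,Nu); %rescale each weight vector to unit length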

  3. Dataset
  • I will provide you with 15 natural images of foliage from the McGill Calibrated Image dataset
  • These images have been prefiltered using a DoG (difference-of-Gaussians) model of primate LGN selectivity (Hawken & Parker 1991); a minimal filtering sketch follows this slide
  • Thus the input simulates the feedforward input to V1
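
For orientation, here is a minimal sketch of the kind of DoG prefiltering described above. It is not the actual preprocessing used to prepare the assignment images: the filename, the Gaussian widths, and the use of imgaussfilt (Image Processing Toolbox) are assumptions for illustration only.

%Minimal DoG sketch (illustrative only; not the assignment's preprocessing pipeline)
im=double(imread('foliage01.png')); %hypothetical grayscale natural image
center=imgaussfilt(im,1); %narrow center Gaussian
surround=imgaussfilt(im,3); %broad surround Gaussian
lgnim=center-surround; %center-surround (LGN-like) output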

  4. Submission Details
  • You will submit a short lab report on your experiments.
  • For each experiment, the report will include:
    • Any code you may have developed
    • The parameter values you tested
    • The graphs you produced
    • The observations you made
    • The conclusions you drew

  5. Discussion Questions
  • In particular, I want your reports to answer at least the following questions:
    • Why does Hebbian learning yield the dominant eigenvector of the stimulus autocorrelation matrix? (A toy numerical check follows this slide.)
    • What is the Oja rule and why is it needed?
    • How does adding recurrence create diversity?
    • What is the effect of varying the learning time constants?
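
For the first question, a small toy check (my own sketch on synthetic inputs, not the assignment images or code) that the Oja-stabilized Hebbian update converges, up to sign, to the dominant eigenvector of the input autocorrelation matrix:

%Toy check: Oja's rule recovers the dominant eigenvector of the autocorrelation matrix
Nu=5; nit=1e5; tauw=1e3; %toy problem size and learning time constant
C=randn(Nu); Q=C*C'; %a random symmetric positive semidefinite "autocorrelation"
R=chol(Q+1e-6*eye(Nu),'lower'); %so the inputs u below have autocorrelation ~Q
w=randn(1,Nu)/Nu; %random initial weight vector
for i=1:nit
    u=R*randn(Nu,1); %synthetic input sample
    v=w*u; %output
    w=w+(1/tauw)*(v*u'-v^2*w); %Hebbian update with the Oja term
end
[e,~]=eigs(Q,1); %dominant eigenvector of Q
disp(abs(w*e)/norm(w)); %should be close to 1 if w has aligned with e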

  6. Graphs
  • The graphs you produce should be as similar as possible to mine.
  • Make sure everything is intelligible!

  7. Due Date
  • The report is due Wednesday, April 13.

  8. Eigenvectors

function lgneig(lgnims,neigs,nit)
%Computes and plots the first neigs eigenimages of the LGN inputs to V1
%lgnims = cell array of images representing normalized LGN output
%neigs = number of eigenimages to compute
%nit = number of image patches on which to base the estimate
dx=1.5; %pixel size in arcmin. This is arbitrary.
v1rad=round(10/dx); %V1 cell radius (pixels)
Nu=(2*v1rad+1)^2; %Number of input units
nim=length(lgnims);
Q=zeros(Nu);
for i=1:nit
    %Draw a random patch (the sampling lines were cut off in the transcript; this is one plausible reconstruction)
    im=lgnims{randi(nim)};
    [ny,nx]=size(im);
    x=randi([v1rad+1,nx-v1rad]);
    y=randi([v1rad+1,ny-v1rad]);
    u=im(y-v1rad:y+v1rad,x-v1rad:x+v1rad);
    u=u(:);
    Q=Q+u*u'; %Form autocorrelation matrix
end
Q=Q/Nu; %normalize
[v,d]=eigs(Q,neigs); %compute the leading eigenvectors
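
The slide's function does not return anything, so to display the eigenimages you could, for example, modify it to return v and v1rad (that modified signature is my assumption, not part of the provided code) and reshape each eigenvector back into a square patch:

%Illustrative display, assuming lgneig is modified to return v and v1rad
[v,v1rad]=lgneig(lgnims,8,10000); %hypothetical call: 8 eigenimages, 10,000 patches
sz=2*v1rad+1;
for k=1:size(v,2)
    subplot(2,4,k);
    imagesc(reshape(v(:,k),sz,sz)); %each column of v is one eigenimage
    axis image off;
end
colormap gray;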

  9. Output

  10. Hebbian Learning (Feedforward)

function hebb(lgnims,nv1cells,nit)
%Implements a version of Hebbian learning with the Oja rule, running on
%simulated LGN inputs from natural images.
%lgnims = cell array of images representing normalized LGN output
%nv1cells = number of V1 cells to simulate
%nit = number of learning iterations
dx=1.5; %pixel size in arcmin. This is arbitrary.
v1rad=round(60/dx); %V1 cell radius (pixels)
Nu=(2*v1rad+1)^2; %Number of input units
tauw=1e+6; %learning time constant
nim=length(lgnims);
w=normrnd(0,1/Nu,nv1cells,Nu); %random initial weights
for i=1:nit
    %Draw a random patch (the sampling lines were cut off in the transcript; this is one plausible reconstruction)
    im=lgnims{randi(nim)};
    [ny,nx]=size(im);
    x=randi([v1rad+1,nx-v1rad]);
    y=randi([v1rad+1,ny-v1rad]);
    u=im(y-v1rad:y+v1rad,x-v1rad:x+v1rad);
    u=u(:);
    %See Dayan & Abbott, Section 8.2
    v=w*u; %Output of each V1 cell
    %Update feedforward weights using Hebbian learning with the Oja rule
    w=w+(1/tauw)*(v*u'-repmat(v.^2,1,Nu).*w);
end
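
One simple sanity check you might add just after the learning loop (my suggestion, not part of the slide's code): the Oja term drives each weight vector toward unit norm, so the row norms of w indicate whether nit and tauw were large enough for convergence.

%Possible convergence check (not on the slide); place after the learning loop, where w is in scope
wnorms=sqrt(sum(w.^2,2)); %one norm per simulated V1 cell
fprintf('weight-vector norms: %s\n',mat2str(wnorms',3)); %should approach 1 under the Oja rule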

  11. Output

  12. Hebbian Learning (With Recurrence)

function hebbfoldiak(lgnims,nv1cells,nit)
%Implements a version of Foldiak's 1989 network, running on simulated LGN
%inputs from natural images. Incorporates feedforward Hebbian learning and
%recurrent inhibitory anti-Hebbian learning.
%lgnims = cell array of images representing normalized LGN output
%nv1cells = number of V1 cells to simulate
%nit = number of learning iterations
dx=1.5; %pixel size in arcmin. This is arbitrary.
v1rad=round(60/dx); %V1 cell radius (pixels)
Nu=(2*v1rad+1)^2; %Number of input units
tauw=1e+6; %feedforward learning time constant
taum=1e+6; %recurrent learning time constant
zdiag=(1-eye(nv1cells)); %All 1s but 0 on the diagonal
w=normrnd(0,1/Nu,nv1cells,Nu); %random initial feedforward weights
m=zeros(nv1cells); %recurrent inhibitory weights start at zero
nim=length(lgnims);
for i=1:nit
    %Draw a random patch (the sampling lines were cut off in the transcript; this is one plausible reconstruction)
    im=lgnims{randi(nim)};
    [ny,nx]=size(im);
    x=randi([v1rad+1,nx-v1rad]);
    y=randi([v1rad+1,ny-v1rad]);
    u=im(y-v1rad:y+v1rad,x-v1rad:x+v1rad);
    u=u(:);
    %See Dayan & Abbott pp. 301-302, 309-310 and Foldiak 1989
    k=inv(eye(nv1cells)-m); %steady-state operator of the recurrent network
    v=k*w*u; %steady-state output for this input
    %Update feedforward weights using Hebbian learning with the Oja rule
    w=w+(1/tauw)*(v*u'-repmat(v.^2,1,Nu).*w);
    %Update inhibitory recurrent weights using anti-Hebbian learning (kept non-positive)
    m=min(0,m+zdiag.*((1/taum)*(-v*v')));
end
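
To see how the anti-Hebbian recurrence creates diversity (one of the discussion questions), a possible diagnostic, again my suggestion rather than part of the slide's code, is to store the outputs v from the final iterations as columns of a matrix V and inspect their correlations, which should be weaker off the diagonal than in the purely feedforward network:

%Possible diagnostic (not on the slide); assumes V is nv1cells-by-npatches, one stored output v per column
C=corrcoef(V'); %correlation matrix between the simulated V1 cells' outputs
imagesc(C); colorbar;
title('Output correlations: off-diagonal terms should shrink with recurrence');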

  13. Output
