
Presentation Transcript


  1. Lecture 2-3-4: ASSOCIATIONS, RULES, AND MACHINES. CONCEPT OF AN E-MACHINE: simulating symbolic read/write memory by changing dynamical attributes of data in a long-term memory. Victor Eliashberg, Consulting Professor, Stanford University, Department of Electrical Engineering. Slide 1

  2. SCIENTIFIC / ENGINEERING APPROACH. Figure: external world, W; sensorimotor devices, D; computing system, B, simulating the work of the human nervous system; the pair (D,B) forms a human-like robot, and (W,D) is the external system. “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.” (Sherlock Holmes) Slide 2

  3. ZERO-APPROXIMATION MODEL. (Figure labels: s(ν), s(ν+1).) Slide 3

  4. BIOLOGICAL INTERPRETATION. Figure labels: working memory, episodic memory, and mental imagery; motor control; subsystems AM and AS. Slide 4

  5. PROBLEM 1: LEARNING TO SIMULATE THE TEACHER. This problem is simple: system AM needs to learn a manageable number of fixed rules. (Figure labels: y, x, X11, X12, AM, sel, NM.y, 0, 1, NM; symbol read, move, type symbol; Teacher; current state of mind, next state of mind.) Slide 5

  6. PROBLEM 2: LEARNING TO SIMULATE THE EXTERNAL SYSTEM. This problem is hard: the number of fixed rules needed to represent a RAM with n locations explodes exponentially with n. (Figure labels: y, 1, 2, NS.) NOTE: System (W,D) shown in slide 3 has the properties of a random access memory (RAM). Slide 6
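One way to make the explosion concrete: if the RAM is treated as a finite-state machine, every distinct memory content is a distinct internal state, so a fixed-rule (table) description must cover on the order of |D|^n states for n locations over a data alphabet D. The short Python sketch below only illustrates this counting argument; the function name and the chosen alphabet size are illustrative, not taken from the slides.

def n_memory_states(n_locations, alphabet_size=2):
    # Each distinct memory content of the RAM is a distinct internal state,
    # so a fixed-rule description must cover all of them.
    return alphabet_size ** n_locations

for n in (4, 8, 16, 32, 64):
    print(f"n = {n:2d} locations -> {n_memory_states(n)} states to cover with fixed rules")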

  7. Programmable logic array (PLA): a logic implementation of a local associative memory (solves problem 1 from slide 5) Slide 7
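The PLA itself is only pictured on the slide; the sketch below is a hypothetical software analogue of a PLA used as a local associative memory: an AND plane matches the input against stored product terms (with don't-cares), and an OR plane asserts the output bits of every matching term. The rule table is illustrative, not taken from the lecture.

from typing import List, Tuple

# Hypothetical rule table: (input pattern, output pattern); '-' is a don't-care.
RULES: List[Tuple[str, str]] = [
    ("00", "01"),
    ("01", "10"),
    ("1-", "11"),
]

def pla(x: str, rules: List[Tuple[str, str]] = RULES) -> str:
    # AND plane: find the product terms matched by input x.
    # OR plane: wired-OR of the output patterns of all matching terms.
    out = [0] * len(rules[0][1])
    for pattern, y in rules:
        if all(p in ("-", b) for p, b in zip(pattern, x)):
            out = [o | int(bit) for o, bit in zip(out, y)]
    return "".join(map(str, out))

print(pla("01"))   # -> '10'
print(pla("11"))   # -> '11' (matched by the '1-' term)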

  8. BASIC CONCEPTS FROM THE AREA OF ARTIFICIAL NEURAL NETWORKS Slide 8

  9. Typical neuron. A neuron is a very specialized cell. There are several types of neurons, with different shapes and different types of membrane proteins. A biological neuron is a complex functional unit. However, it is helpful to start with a simple artificial neuron (next slide). Slide 9

  10. Neuron as a first-order linear threshold element.
Inputs: xk ∈ R′; parameters: g1, …, gm ∈ R′; output: y ∈ R′, where R′ is the set of non-negative real numbers.
Equations:
τ du/dt + u = Σ_{k=1..m} gk·xk   (1)
y = L(u)   (2)
L(u) = u if u > 0, 0 otherwise   (3)
A more convenient notation: xk is the k-th component of the input vector; gk is the gain (weight) of the k-th synapse; s = Σ_{k=1..m} gk·xk is the total postsynaptic current; u is the postsynaptic potential; τ is the time constant of the neuron; y = L(u) is the neuron output. Slide 10
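A minimal numerical sketch of equations (1)-(3), assuming forward-Euler integration of (1); the time constant, gains, and inputs below are illustrative values, not taken from the slide.

import numpy as np

def simulate_neuron(x, g, tau=10.0, dt=1.0, steps=100):
    # First-order linear threshold element:
    #   tau * du/dt + u = sum_k g_k * x_k   (1)
    #   y = L(u), L(u) = u if u > 0 else 0  (2)-(3)
    s = float(np.dot(g, x))        # total postsynaptic current
    u, trace = 0.0, []
    for _ in range(steps):
        u += dt / tau * (s - u)    # forward-Euler step of (1)
        trace.append(max(u, 0.0))  # output y = L(u)
    return trace

y = simulate_neuron(x=np.array([1.0, 0.5, 0.0]), g=np.array([0.8, 0.4, 1.0]))
print(round(y[-1], 3))   # approaches the steady state s = g . x = 1.0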

  11. Input synaptic matrix, input long-term memory (ILTM), and DECODING (computing similarity).
si = Σ_{k=1..m} gx_ik·xk,  i = 1, …, n   (1)
An abstract representation of (1):  fdec: X × Gx → S   (2)
Notation: x = (x1, …, xm) are the signals from input neurons (not shown); gx = (gx_ik), i = 1, …, n, k = 1, …, m, is the matrix of synaptic gains -- we postulate that this matrix represents input long-term memory (ILTM); s = (s1, …, sn) is the similarity function. Slide 11

  12. Layer with inhibitory connections as the mechanism of the winner-take-all (WTA) choice. (Figure: neurons with excitatory inputs s1, …, sn, potentials u1, …, un, time constant τ, inhibitory signal xinh with constants q, α, β, and outputs d1, …, dn; equations (1)-(3); small white and black circles represent excitatory and inhibitory synapses, respectively.)
Procedural representation:
RANDOM CHOICE  iwin : { i : si = max_j(sj) > 0 }   (4)
if (i == iwin) di = 1; else di = 0;   (5)
where “:” denotes random equally probable choice. Slide 12
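A minimal procedural sketch of (4)-(5) in Python (NumPy assumed): the winner is drawn with equal probability from the set of indices where si attains the positive maximum, and d is the corresponding one-hot vector.

import numpy as np

def wta_choice(s, rng=np.random.default_rng()):
    # iwin : { i : s_i = max_j s_j > 0 }  (4), ties broken equi-probably;
    # d_i = 1 for the winner, 0 otherwise (5).
    s = np.asarray(s, dtype=float)
    d = np.zeros_like(s)
    if s.max() <= 0:
        return None, d                       # no active candidate
    i_win = int(rng.choice(np.flatnonzero(s == s.max())))
    d[i_win] = 1.0
    return i_win, d

print(wta_choice([1.0, 2.0, 2.0, 0.5]))      # winner is index 1 or 2, equally likely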

  13. Output synaptic matrix, output long-term memory (OLTM), and ENCODING (data retrieval).
yk = Σ_{i=1..n} gy_ki·di,  k = 1, …, p   (1)
An abstract representation of (1):  fenc: D × Gy → Y   (2)
Notation: d = (d1, …, dn) are the signals from the WTA layer (see previous slide); gy = (gy_ki), k = 1, …, p, i = 1, …, n, is the matrix of synaptic gains -- we postulate that this matrix represents output long-term memory (OLTM); y = (y1, …, yp) is the output vector. Slide 13

  14. A neural implementation of a local associative memory (solves problem 1 from slide 5) (WTA.EXE). Addressing by content: DECODING via the input long-term memory (ILTM), RANDOM CHOICE, and ENCODING (retrieval) via the output long-term memory (OLTM). (Figure labels: S21(i,j), N1(j).) Slide 14

  15. A functional model of the previous network [7],[8],[11] (WTA.EXE). (Slide equations (1)-(5).) Slide 15
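The functional model's own equations (1)-(5) did not come through in this transcript; the sketch below is one plausible reading of it, assuming NumPy: DECODING as the similarity s = gx·x (slide 11), RANDOM CHOICE in a WTA layer (slide 12), and ENCODING as retrieval y = gy·d (slide 13). The stored patterns are illustrative.

import numpy as np

rng = np.random.default_rng(0)

class LocalAssociativeMemory:
    # ILTM rows hold stored input patterns, OLTM rows hold the associated outputs.
    def __init__(self, gx, gy):
        self.gx = np.asarray(gx, dtype=float)   # ILTM, shape (n, m)
        self.gy = np.asarray(gy, dtype=float)   # OLTM, shape (n, p)

    def recall(self, x):
        s = self.gx @ x                                         # DECODING: similarity s_i
        i_win = int(rng.choice(np.flatnonzero(s == s.max())))   # RANDOM CHOICE (WTA)
        d = np.eye(len(s))[i_win]                               # one-hot WTA output
        return self.gy.T @ d                                    # ENCODING: retrieved y

mem = LocalAssociativeMemory(gx=[[1, 0, 0], [0, 1, 1]],   # two stored associations
                             gy=[[1, 0], [0, 1]])
print(mem.recall(np.array([0.9, 0.1, 0.0])))              # closest to pattern 1 -> [1. 0.]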

  16. HOW CAN WE SOLVE THE HARD PROBLEM 2 from slide 6? Slide 16

  17. External system as a generalized RAM Slide 17

  18. Concept of a generalized RAM (GRAM) Slide 18
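The GRAM definition is only pictured on the slide; judging from the simulation on slide 32, a GRAM behaves like a symbolic RAM over an address alphabet A and a data alphabet D in which presenting din = ε reads back the symbol last written at addr, while any other din overwrites it. The Python sketch below rests on that reading and is not an official definition.

EPS = "ε"   # the 'empty' data symbol: din = ε requests a read

class GRAM:
    # Hedged sketch of a generalized RAM over symbolic alphabets A and D.
    def __init__(self):
        self.cells = {}                    # addr -> symbol last written there

    def step(self, addr, din):
        if din == EPS:                     # read: return the stored symbol
            return self.cells.get(addr, EPS)
        self.cells[addr] = din             # write: store and echo the symbol
        return din

g = GRAM()
for addr, din in [("1", "a"), ("1", "b"), ("2", "a"), ("2", "b"), ("1", EPS)]:
    out = g.step(addr, din)
print(out)   # 'b' -- the symbol most recently written at address 1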

  19. Slide 18 Slide 19

  20. Representation of local associative memory in terms of three “one-step” procedures: DECODING, CHOICE, ENCODING Slide 20

  21. INTERPRETATION PROCEDURE Slide 21

  22. At the stage of training, sel = 1; at the stage of examination, sel = 0. System AS simply “tape-records” its experience, (x1, x2, xy)(0:ν). (Figure labels: y, 1, 2, NS, GRAM.) NOTE: System (W,D) shown in slide 3 has the properties of a random access memory (RAM). Slide 22
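A minimal sketch of the "tape-recording" idea: during training (sel = 1) the visible signals of each step are simply appended to long-term memory, and during examination (sel = 0) nothing new is written. The transcript writes the recorded sequence as (x1, x2, xy)(0:ν); treating each record as a triple of signals, and the signal strings used below, are assumptions for illustration.

class TapeRecorder:
    # System AS as a pure recorder of its experience.
    def __init__(self):
        self.tape = []              # long-term memory: one record per time step

    def step(self, x1, x2, y, sel):
        if sel == 1:                # training: simply record the experience
            self.tape.append((x1, x2, y))
        return len(self.tape)       # length of the recorded sequence so far

rec = TapeRecorder()
rec.step("see:a", "hear:1", "say:a", sel=1)
rec.step("see:b", "hear:2", "say:b", sel=1)
rec.step("see:a", "hear:1", "say:?", sel=0)   # examination: nothing is recorded
print(rec.tape)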

  23. EXPERIMENT 1: Fixed rules and variable rules Slide 23

  24. EXPERIMENT 1 (continued 1) Slide 24

  25. EXPERIMENT 1 (continued 2) Slide 25

  26. A COMPLETE MEMORY MACHINE (CMM) SOLVES PROBLEM 2, but this solution can be easily falsified! Slide 26

  27. GRAM as a state machine: combinatorial explosion of the number of fixed rules Slide 27

  28. Concept of a primitive E-machine Slide 28

  29. (Equation fragments: s(i) > c; (α < .5).) Slide 29

  30. Effect of a RAM without a RAM buffer. (Figure: tables of G-state and E-state over locations 1-4 holding symbols a, b, c.) Slide 30

  31. EFFECT OF “MANY MACHINES IN ONE”. (Figure: n = 8 locations of LTM; G-state columns X(1) = 0 0 0 0 1 1 1 1 and X(2) = 0 0 1 1 0 0 1 1; E-state column y(1) = 0 1 0 1 0 1 0 1; function labels AND, OR, XOR, NAND, NOR.) A table with n = 2^(m+1) locations represents N = 2^(2^m) different m-input 1-output Boolean functions. Let m = 10. Then n = 2048 and N = 2^1024. Slide 31
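A quick check of the slide's arithmetic, plus one hedged reading of the "many machines in one" figure: the eight LTM locations enumerate every (X(1), X(2), y(1)) combination (the shared G-state), and an E-state that activates only the rows consistent with a chosen truth table turns the same G-state into that particular Boolean function. The row-selection interpretation and the helper below are assumptions for illustration.

from itertools import product

m = 10
n = 2 ** (m + 1)                 # LTM locations: all (input, output) combinations
N = 2 ** (2 ** m)                # distinct m-input 1-output Boolean functions
print(n, N.bit_length() - 1)     # 2048 1024, i.e. N = 2**1024

# m = 2 demo: locations enumerate (x1, x2, y), matching the slide's columns
# X(1) = 0 0 0 0 1 1 1 1, X(2) = 0 0 1 1 0 0 1 1, y(1) = 0 1 0 1 0 1 0 1.
locations = list(product((0, 1), repeat=3))

def e_state_for(f):
    # Activate exactly the rows whose output agrees with the chosen function.
    return [1 if y == f(x1, x2) else 0 for x1, x2, y in locations]

print("AND :", e_state_for(lambda a, b: a & b))
print("XOR :", e_state_for(lambda a, b: a ^ b))
print("NAND:", e_state_for(lambda a, b: 1 - (a & b)))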

  32. Simulation of a GRAM with A = {1, 2} and D = {a, b, ε}. LTM (n = 4 locations):
i:    1  2  3  4
addr: 1  1  2  2
din:  a  b  b  a
dout: a  b  b  a
s(i) is the number of matches in the first two rows. Input (addr, din) = (1, ε) produces s(i) = 1 for i = 1 and i = 2.
E-state update: if ( s(i) > e(i) ) e(i)(ν+1) = s(i)(ν); else e(i)(ν+1) = c·e(i)(ν);  τ = 1/(1−c).
se(i) = s(i)·(1 + a·e(i));  (a < .5).
At ν = 5, dout = b is read from i = 2, which has se(i) = max(se). Slide 32

  33. iwin : { i : se(i) = max(se) > 0 };  y = gy(iwin). Assume that the E-machine starts with the state of LTM shown in the table and doesn't learn more, so this state remains the same. What changes is the E-state, e(1), …, e(4). Assume that at ν = 1, e(1) = … = e(4) = 0. Let us send the input sequence (addr, din)(1:5) = (1,a), (1,b), (2,a), (2,b), (1,ε). As can be verified, at ν = 5 the state e(i) and the functions s(i) and se(i) for i = 1, …, 4 take the values shown on the slide. Accordingly, iwin = 2 and dout = b.
LTM (as on slide 32): gx(1, 1:4) = addr = (1, 1, 2, 2); gx(2, 1:4) = din = (a, b, b, a); gy(1, 1:4) = dout = (a, b, b, a).
s(i) is the number of matches in the first two rows; input (addr, din) = (1, ε) produces s(i) = 1 for i = 1 and i = 2.
if ( s(i) > e(i) ) e(i)(ν+1) = s(i)(ν); else e(i)(ν+1) = c·e(i)(ν);  τ = 1/(1−c).
se(i) = s(i)·(1 + a·e(i));  (a < .5). Slide 33
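A small Python sketch of this run, reproducing the worked example from slides 32-33. The decay constant c and the modulation constant a are not given numerically in the transcript, so the values below (c = 0.9, hence τ = 10, and a = 0.4 < .5) are illustrative; ties in se(i) are broken by an equally probable random choice, as in the WTA rule.

import random

EPS = "ε"
GX_ADDR = ["1", "1", "2", "2"]       # gx(1, 1:4): stored addr symbols
GX_DIN  = ["a", "b", "b", "a"]       # gx(2, 1:4): stored din symbols
GY_DOUT = ["a", "b", "b", "a"]       # gy(1, 1:4): stored dout symbols

C, A = 0.9, 0.4                      # illustrative: tau = 1/(1 - C) = 10, A < .5
e = [0.0] * 4                        # E-state at nu = 1

def s_of(addr, din):
    # s(i): number of matches of (addr, din) against the first two LTM rows.
    return [(addr == GX_ADDR[i]) + (din == GX_DIN[i]) for i in range(4)]

for addr, din in [("1", "a"), ("1", "b"), ("2", "a"), ("2", "b"), ("1", EPS)]:
    s = s_of(addr, din)
    se = [s[i] * (1 + A * e[i]) for i in range(4)]             # se(i) = s(i)(1 + a*e(i))
    winners = [i for i in range(4) if se[i] == max(se) > 0]
    i_win = random.choice(winners)                             # WTA with random ties
    dout = GY_DOUT[i_win]
    e = [s[i] if s[i] > e[i] else C * e[i] for i in range(4)]  # E-state update

print(i_win + 1, dout)   # -> 2 b : the value most recently written at address 1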

  34. What can be efficiently computed in this “nonclassical” symbolic/dynamical computational paradigm (call it the E-machine paradigm)? What computational resources are available in the brain -- especially in the neocortex -- for the implementation of this paradigm? How can dynamical equations (such as the last equation in slide 29) be efficiently implemented in biologically plausible neural network models? Slide 34
