Bit-True Modeling of Neural Network SILab presentation Ali Ahmadi June 2007

Presentation Transcript


  1. Bit-True Modeling of Neural Network SILab presentation Ali Ahmadi June 2007

  2. Outline • Introduction: structures of Neural Networks • Hopfield • LAM • BAM • Bit-True Arithmetic • Training modes for NN hardware • Bit-true models of the networks • Simulation results

  3. Hopfield Network • Single layer • Fully connected [1]

  4. Weight calculation • Tji = Tij = Σp (2api − 1)(2apj − 1) if i ≠ j; Tii = 0, where api is the ith element of the pth pattern • Updating neuron • For input vector u: Sj = Σi Tji ui, then uj = 1 if Sj > 0, uj = 0 if Sj < 0, and uj is unchanged if Sj = 0 [1]
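  A minimal C sketch of the rule above, assuming binary 0/1 patterns; the sizes and names (NUM_PATTERNS, NUM_NEURONS, a, T, u) are illustrative, not from the slides:

    #define NUM_PATTERNS 3                 /* illustrative sizes only            */
    #define NUM_NEURONS  64

    int a[NUM_PATTERNS][NUM_NEURONS];      /* binary (0/1) training patterns     */
    int T[NUM_NEURONS][NUM_NEURONS];       /* weight matrix                      */

    /* Tji = Tij = sum over p of (2*a[p][i]-1)*(2*a[p][j]-1), Tii = 0 */
    void hopfield_train(void)
    {
        for (int i = 0; i < NUM_NEURONS; i++)
            for (int j = 0; j < NUM_NEURONS; j++) {
                T[i][j] = 0;
                if (i == j) continue;              /* no self-connections        */
                for (int p = 0; p < NUM_PATTERNS; p++)
                    T[i][j] += (2 * a[p][i] - 1) * (2 * a[p][j] - 1);
            }
    }

    /* Sj = sum over i of Tji*ui; uj -> 1 if Sj > 0, 0 if Sj < 0, unchanged if 0 */
    void hopfield_update(int u[NUM_NEURONS])
    {
        for (int j = 0; j < NUM_NEURONS; j++) {
            int S = 0;
            for (int i = 0; i < NUM_NEURONS; i++)
                S += T[j][i] * u[i];
            if (S > 0)      u[j] = 1;
            else if (S < 0) u[j] = 0;
        }
    }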

  5. LAM (Linear Associative Memory) Network • Single-layer feed-forward network • Recovers the output pattern from full or partial information in the input pattern [1]

  6. Weight calculation • Wij = Σm (2aim − 1)(2bjm − 1), where aim is the ith element of the mth pattern • Each output neuron i has a threshold Ti • Output calculation • For input pattern b the output pattern is a: Ui = Σj Wij bj, then ai = 1 if Ui > Ti, else ai = 0
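  A short C sketch of the recall step above, assuming binary 0/1 vectors; NA, NB and the function name are illustrative, and the threshold array T is taken as given since its formula is not reproduced on the slide:

    #define NA 64                          /* output dimension (illustrative)  */
    #define NB 64                          /* input dimension  (illustrative)  */

    /* Ui = sum over j of Wij*bj; ai = 1 if Ui > Ti, else 0 */
    void lam_recall(const int W[NA][NB], const int b[NB], const int T[NA], int a[NA])
    {
        for (int i = 0; i < NA; i++) {
            int U = 0;
            for (int j = 0; j < NB; j++)
                U += W[i][j] * b[j];
            a[i] = (U > T[i]) ? 1 : 0;
        }
    }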

  7. BAM (Bidirectional Associative Memory) Network • Bidirectional • Two layers with different dimensions • Each stored pattern is a pair (a, b), one vector per layer

  8. In the X→Y pass the weight matrix W is used; in the Y→X pass, WT • Weight calculation • Wij = Σm (2aim − 1)(2bjm − 1) • Output calculation • In the forward pass the input of the jth neuron in layer Y is Net y(j) = Σi xi Wij, then yj = 1 if Net y(j) > 0, yj = 0 if Net y(j) < 0, and yj is unchanged if Net y(j) = 0 • In the backward pass the input of the jth neuron in layer X is Net x(j) = Σi yi Wji, then xj follows the same threshold rule [1]
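  A C sketch of one forward and one backward pass under the same assumptions (binary 0/1 states, keep the previous state when the net input is zero); NX, NY and the function names are illustrative:

    #define NX 48                              /* layer X size (illustrative)  */
    #define NY 56                              /* layer Y size (illustrative)  */

    /* forward pass: Net y(j) = sum over i of xi*Wij */
    void bam_forward(const int W[NX][NY], const int x[NX], int y[NY])
    {
        for (int j = 0; j < NY; j++) {
            int net = 0;
            for (int i = 0; i < NX; i++)
                net += x[i] * W[i][j];
            if (net > 0)      y[j] = 1;
            else if (net < 0) y[j] = 0;        /* net == 0: yj is unchanged    */
        }
    }

    /* backward pass uses WT: Net x(j) = sum over i of yi*Wji */
    void bam_backward(const int W[NX][NY], const int y[NY], int x[NX])
    {
        for (int j = 0; j < NX; j++) {
            int net = 0;
            for (int i = 0; i < NY; i++)
                net += y[i] * W[j][i];
            if (net > 0)      x[j] = 1;
            else if (net < 0) x[j] = 0;
        }
    }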

  9. Bit-True Arithmetic • SUM • Inputs are 2's complement with length (WL − 1); the output is 2's complement with length WL • If the inputs do not have the same sign, sign extension is applied based on the carry
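  A possible C sketch of this addition: two (WL−1)-bit 2's-complement operands always fit into a WL-bit sum, and the result is normalized back to WL-bit 2's complement (the function name and plain-int containers are assumptions, not the presentation's code):

    #include <stdint.h>

    /* add two (wl-1)-bit 2's-complement values, return a wl-bit 2's-complement value */
    int32_t bit_true_sum(int32_t a, int32_t b, int wl)
    {
        int32_t sum  = a + b;                  /* exact sum, at most wl bits wide     */
        int32_t mask = (1 << wl) - 1;
        sum &= mask;                           /* keep the low wl bits                */
        if (sum & (1 << (wl - 1)))             /* MSB set: sign-extend the result     */
            sum |= ~mask;
        return sum;
    }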

  10. Bit-True Arithmetic • Multiply • Inputs are 2's complement with length WL; the output is 2's complement with length WL
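  A possible C sketch: the exact product of two WL-bit operands is 2·WL bits wide, so it must be reduced back to WL bits. Keeping the low WL bits (wrap-around) is shown here; whether the original model wraps or saturates is an assumption, since the slide does not say:

    #include <stdint.h>

    /* multiply two wl-bit 2's-complement values, return a wl-bit 2's-complement value */
    int32_t bit_true_mul(int32_t a, int32_t b, int wl)
    {
        int64_t prod = (int64_t)a * (int64_t)b;     /* exact double-width product  */
        int64_t mask = ((int64_t)1 << wl) - 1;
        int64_t r    = prod & mask;                 /* keep the low wl bits        */
        if (r & ((int64_t)1 << (wl - 1)))           /* MSB set: value is negative  */
            r -= (int64_t)1 << wl;
        return (int32_t)r;
    }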

  11. Training modes for Neural Network hardware • Off-chip learning: training is performed off-chip with high precision; only the forward-propagation pass of the recall phase is performed on-chip. • Chip-in-the-loop learning: the chip is used during training, but only for forward propagation. • On-chip learning: training is done entirely on-chip; sensitive to the use of limited-precision weights.

  12. Bit-true Model of Hopfield • Part of the code for updating a neuron:
  // high-precision arithmetic
  sum += t[j][i] * neuron[i];
  // arithmetic with finite word length
  b1 = t[j][i];
  a1 = neuron[i];
  a  = Decimal2TwosComplement(a1, WordLength - 1);
  b  = Decimal2TwosComplement(b1, WordLength - 1);
  c  = MulBitTruePrecise(a, b, WordLength - 1);
  s  = Decimal2TwosComplement(sum, WordLength - 1);
  s1 = SumBitTruePrecise(s, c, WordLength - 1);
  sum = TwosComplement2Decimal(s1, WordLength);
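  The slides use Decimal2TwosComplement and TwosComplement2Decimal without showing their bodies; a plausible C sketch is below, assuming they clip a signed integer into the WL-bit range and map a WL-bit pattern back to a signed integer (the saturation behaviour is an assumption):

    #include <stdint.h>

    /* encode a signed integer as a wl-bit 2's-complement pattern (saturating) */
    uint32_t Decimal2TwosComplement(int32_t v, int wl)
    {
        int32_t max = (1 << (wl - 1)) - 1;
        int32_t min = -(1 << (wl - 1));
        if (v > max) v = max;                  /* assumption: out-of-range values saturate */
        if (v < min) v = min;
        return (uint32_t)v & ((1u << wl) - 1);
    }

    /* decode a wl-bit 2's-complement pattern back to a signed integer */
    int32_t TwosComplement2Decimal(uint32_t bits, int wl)
    {
        int32_t v = (int32_t)(bits & ((1u << wl) - 1));
        if (v & (1 << (wl - 1)))               /* MSB set: negative value   */
            v -= 1 << wl;
        return v;
    }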

  13. Bit-true Model of LAM • Part of the code that calculates the value of an output neuron for an input pattern (propagation):
  // high-precision arithmetic
  RawOutVect[i] += W[i][j] * inVect[j];
  // arithmetic with finite word length
  b1 = W[i][j];
  a1 = inVect[j];
  b  = Decimal2TwosComplement(b1, WordLength - 1);
  a  = Decimal2TwosComplement(a1, WordLength - 1);
  c  = mulBitTruePrecise(a, b, WordLength - 1);
  s  = Decimal2TwosComplement(RawOutVect[i], WordLength - 1);
  s1 = SumBitTruePrecise(s, c, WordLength - 1);
  RawOutVect[i] = TwosComplement2Decimal(s1, WordLength);

  14. Bit-true Model of BAM
  // high-precision arithmetic
  Sum += To->Weight[i][j] * From->Output[j];
  // finite-precision arithmetic
  a1 = To->Weight[i][j];
  b1 = From->Output[j];
  a  = Decimal2TwosComplement(a1, Wordlength - 1);
  b  = Decimal2TwosComplement(b1, Wordlength - 1);
  c  = mulBitTruePrecise(a, b, Wordlength - 1);
  s  = Decimal2TwosComplement(Sum, Wordlength - 1);
  s1 = SumBitTruePrecise(s, c, Wordlength - 1);
  Sum = TwosComplement2Decimal(s1, Wordlength);

  15. Simulation result of Hopfield network [figure]: input pattern used to train the network, input test pattern, and output patterns for word lengths of 4, 5, 6, 7, 8, and 32 bits

  16. Simulation result of LAM network [figure]: input pattern used to train the network, input test patterns, and output patterns for WL = 5, 6, 7, and 32 bits

  17. Input pattern pairs for layer X / layer Y:
  "TINA "  / "6843726"
  "ANTJE"  / "8034673"
  " LISA " / "7260915"
  Input test patterns: "TANE ", "ANTJE", "RISE "

  18. Simulation result of BAM network: output for WL = 32 bit
  TINA    -> | TINA    -> 6843726
  ANTJE   -> | ANTJE   -> 8034673
  LISA    -> | LISA    -> 7260915
  6843726 -> | 6843726 -> TINA
  8034673 -> | 8034673 -> ANTJE
  7260915 -> | 7260915 -> LISA
  TANE    -> | TINA    -> 6843726
  ANTJE   -> | ANTJE   -> 8034673
  RISE    -> | DIVA    -> 6060737

  19. Simulation result of BAM network: output for WL = 2 bit
  TINA @   -> | TINA @   -> FENHGKO?
  ANTJE@   -> | &165:?   -> _+87&9)@
  LISA @   -> | LISA @   -> FENHGKO?
  6843726@ -> | 6843726@ -> ^L^;GI
  8034673@ -> | 8034673@ -> &165:?
  7260915@ -> | 7260915@ -> &165:?
  TANE @   -> | TANE @   -> FENHGKO?
  ANTJE@   -> | YNIJE@   -> H0(@=^/5
  RISE @   -> | RISE @   -> "#.$6Z7,

  20. Simulation result of BAM network: output for WL = 3 bit
  TINA    -> | TINA    -> 8034673
  ANTJE   -> | TINA    -> 8034673
  LISA    -> | TINA    -> 8034673
  6843726 -> | 6060737 -> DIVA
  8034673 -> | 8034673 -> TINA
  7260915 -> | 8034673 -> TINA
  TANE    -> | TINA    -> 8034673
  ANTJE   -> | +61>_?  -> GOLKIHL?
  RISE    -> | TINA    -> 8034673

  21. Simulation result of BAM network: output for WL = 8 bit
  TINA    -> | TINA    -> 6843726
  ANTJE   -> | ANTJE   -> 8034673
  LISA    -> | LISA    -> 7260915
  6843726 -> | 6843726 -> TINA
  8034673 -> | 8034673 -> ANTJE
  7260915 -> | 7260915 -> LISA
  TANE    -> | TINA    -> 6843726
  ANTJE   -> | ANTJE   -> 8034673
  RISE    -> | DIVA    -> 6060737

  22. References
  [1] A. S. Pandya, "Pattern Recognition with Neural Networks using C++," 2nd ed. New York: IEEE Press.
  [2] P. Moerland and E. Fiesler, "Neural Network Adaptation for Hardware Implementation," Handbook of Neural Computation, Jan. 1997.
