
Presentation Transcript


  1. Multi Layer NN and Bit-True Modeling of These Networks SILab presentation Ali Ahmadi September 2007

  2. Outline • Review structures of Single Layer Neural Networks • Introduction to Multi Layer Perceptron (MLP) Neural Network • Error Back-Propagation Learning Algorithm • MLP Model for XOR function and Digit Recognition System • Bit-True model of networks

  3. Hopfield Network • Single layer • Fully connected [3]

  4. LAM (Linear Associative Memory) Network • Single-layer feed-forward network • Recovers the output pattern from full or partial information in the input pattern [3]
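
To make the recall step concrete, here is a minimal C++ sketch of a linear associative memory that stores a single (input, output) pair with the Hebbian outer-product rule W = b aᵀ and recovers b with one feed-forward pass; the 3- and 2-dimensional example vectors are assumptions made only for illustration.

    // Minimal LAM sketch: store one pair (a, b) as W = b * a^T, then recall b from a
    // with a single feed-forward pass. The example vectors are assumed for illustration.
    #include <cstdio>

    int main() {
        const int IN = 3, OUT = 2;
        double a[IN]  = { 1, -1, 1 };            // input (key) pattern
        double b[OUT] = { 1, -1 };               // associated output pattern
        double W[OUT][IN];

        // Hebbian outer-product storage (sum over pairs if several were stored).
        for (int k = 0; k < OUT; ++k)
            for (int i = 0; i < IN; ++i)
                W[k][i] = b[k] * a[i];

        // Recall: one feed-forward pass, thresholded back to +-1.
        for (int k = 0; k < OUT; ++k) {
            double net = 0.0;
            for (int i = 0; i < IN; ++i) net += W[k][i] * a[i];
            printf("%+d ", net >= 0.0 ? 1 : -1); // prints the stored b = +1 -1
        }
        printf("\n");
        return 0;
    }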

  5. BAM (Bidirectional Associative Memory) Network • Bidirectional • Two layers, possibly of different dimensions • Each stored pattern is a pair (a, b), one vector per layer
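
A BAM stores pairs the same way, but recall runs in both directions: W a recovers b and Wᵀ b recovers a. The sketch below mirrors the LAM sketch above with an assumed 4- and 2-dimensional pair; only the pattern values are made up.

    // BAM sketch: the same outer-product weights serve both layers. The forward pass
    // W*a recalls b; the backward pass W^T*b recalls a. The stored pair is assumed.
    #include <cstdio>

    int sgn(double x) { return x >= 0.0 ? 1 : -1; }

    int main() {
        const int NA = 4, NB = 2;
        int a[NA] = { 1, 1, -1, -1 };
        int b[NB] = { 1, -1 };
        double W[NB][NA];

        for (int k = 0; k < NB; ++k)
            for (int i = 0; i < NA; ++i)
                W[k][i] = b[k] * a[i];

        for (int k = 0; k < NB; ++k) {           // layer A -> layer B
            double net = 0.0;
            for (int i = 0; i < NA; ++i) net += W[k][i] * a[i];
            printf("b[%d] = %+d  ", k, sgn(net));
        }
        printf("\n");
        for (int i = 0; i < NA; ++i) {           // layer B -> layer A (transposed weights)
            double net = 0.0;
            for (int k = 0; k < NB; ++k) net += W[k][i] * b[k];
            printf("a[%d] = %+d  ", i, sgn(net));
        }
        printf("\n");
        return 0;
    }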

  6. Result of Quantization of Single Layer NN
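
The slide only reports results, so as a reminder of what quantization means in a bit-true model: each floating-point weight is replaced by the nearest value representable in a fixed-point word, saturating on overflow. The sketch below shows one common way to do that; the 1.6 fixed-point format and the sample weights are assumptions, not the word lengths used in the presentation.

    // Hedged sketch of bit-true weight quantization: round a weight to a signed
    // fixed-point grid with int_bits integer and frac_bits fractional bits, and
    // saturate out-of-range values. The format and example weights are assumptions.
    #include <cmath>
    #include <cstdio>

    double quantize(double w, int int_bits, int frac_bits) {
        double step    = std::ldexp(1.0, -frac_bits);          // 2^-frac_bits
        double max_val = std::ldexp(1.0, int_bits) - step;     // largest code
        double min_val = -std::ldexp(1.0, int_bits);           // most negative code
        double q = std::round(w / step) * step;                // round to nearest step
        if (q > max_val) q = max_val;                          // saturate on overflow
        if (q < min_val) q = min_val;
        return q;
    }

    int main() {
        double w[] = { 0.73, -1.218, 0.0317, 2.5 };            // example weights (assumed)
        for (double v : w)                                     // 8-bit word: sign + 1.6 format
            printf("%8.4f -> %9.6f\n", v, quantize(v, 1, 6));
        return 0;
    }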

  7. Why do we need Multi-Layer Neural Networks? • To classify objects by learning nonlinear decision boundaries • There are many problems for which linear discriminants are insufficient to achieve minimum error

  8. AND and OR problem classification [1] • Both are linearly separable, so they can easily be implemented with a single-layer NN (see the sketch below)
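
For concreteness, one threshold unit per function is enough; the weights and biases below (for bipolar ±1 inputs, the same encoding the XOR slides use later) are one hand-picked separating choice among many, not values from the presentation.

    // A single threshold neuron implements AND and OR over bipolar inputs {-1, +1}.
    // The weight/bias values are an assumed, hand-picked linearly separating choice.
    #include <cstdio>

    int threshold_unit(double x1, double x2, double w1, double w2, double bias) {
        return (w1 * x1 + w2 * x2 + bias) >= 0.0 ? 1 : -1;
    }

    int main() {
        int vals[2] = { -1, 1 };
        for (int x1 : vals)
            for (int x2 : vals) {
                int y_and = threshold_unit(x1, x2, 1.0, 1.0, -1.0); // fires only for (+1, +1)
                int y_or  = threshold_unit(x1, x2, 1.0, 1.0,  1.0); // fires unless (-1, -1)
                printf("x = (%+d, %+d)   AND = %+d   OR = %+d\n", x1, x2, y_and, y_or);
            }
        return 0;
    }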

  9. Classification of the XOR problem [1] • XOR is a nonlinear (not linearly separable) problem, so it cannot be implemented with a single-layer NN

  10. MLP NN Structure

  11. Feed-forward Operation (input-to-hidden) • In MLPs a single "bias unit" is connected to each unit other than the input units • Index i runs over input-layer units and j over hidden-layer units; w_ji denotes the input-to-hidden weight at hidden unit j, and w_j0 is the weight from the bias unit to hidden unit j • Each hidden unit first forms its net activation net_j = Σ_i w_ji x_i + w_j0 and then emits an output that is a nonlinear function of it: y_j = f(net_j)

  12. Feed-forward Operation (hidden-to-output) • Each output unit forms its net activation from the hidden-unit signals: net_k = Σ_{j=1..nH} w_kj y_j + w_k0, where subscript k indexes units in the output layer and nH denotes the number of hidden units • The final output of each neuron in the output layer is z_k = f(net_k)
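
Putting the two slides together, one forward pass is two rounds of "weighted sum plus bias, then nonlinearity". The sketch below follows the slide notation (w_ji, w_j0, net_j, y_j; w_kj, w_k0, net_k, z_k); the 2-2-1 layer sizes, the tanh nonlinearity, and the weight values are assumptions chosen only to keep the example self-contained.

    // Feed-forward pass of a small d-nH-c MLP (here 2-2-1) in the slide notation:
    //   net_j = sum_i w_ji * x_i + w_j0,   y_j = f(net_j)
    //   net_k = sum_j w_kj * y_j + w_k0,   z_k = f(net_k)
    // Layer sizes, tanh as f, and the weight values are illustrative assumptions.
    #include <cmath>
    #include <cstdio>

    const int D = 2, NH = 2, C = 1;

    double f(double net) { return std::tanh(net); }            // assumed nonlinearity

    double w_ji[NH][D + 1] = { { -0.5, 1.0,  1.0 },            // column 0 is the bias w_j0
                               {  0.5, 1.0,  1.0 } };
    double w_kj[C][NH + 1] = { {  0.0, 1.0, -1.0 } };          // column 0 is the bias w_k0

    void feed_forward(const double x[D], double y[NH], double z[C]) {
        for (int j = 0; j < NH; ++j) {                         // input-to-hidden
            double net_j = w_ji[j][0];                         // bias unit contributes w_j0
            for (int i = 0; i < D; ++i) net_j += w_ji[j][i + 1] * x[i];
            y[j] = f(net_j);
        }
        for (int k = 0; k < C; ++k) {                          // hidden-to-output
            double net_k = w_kj[k][0];
            for (int j = 0; j < NH; ++j) net_k += w_kj[k][j + 1] * y[j];
            z[k] = f(net_k);
        }
    }

    int main() {
        double x[D] = { 1.0, -1.0 }, y[NH], z[C];
        feed_forward(x, y, z);
        printf("y = (%.3f, %.3f)   z = %.3f\n", y[0], y[1], z[0]);
        return 0;
    }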

  13. Expressive Power of Multi-Layer Networks • Any continuous function from input to output can be implemented by a three-layer net, given a sufficient number of hidden units nH, proper nonlinearities, and weights [2].

  14. Error Back-Propagation Algorithm • Back-propagation is one of the simplest and most general methods for training multilayer neural networks • Its power is that it lets us compute an effective error for each hidden unit, and thus derive a learning rule for the input-to-hidden weights • The goal is to set the interconnection weights based on the training patterns and the desired outputs • Its main disadvantage is slow convergence

  15. BP Algorithm Learning Computation • The training error for one pattern is J(w) = ½ Σ_k (t_k − z_k)², where t_k is the desired output and z_k is the output of the kth neuron in the output layer of the network

  16. BP Algorithm Learning Computation (cont.) • For the hidden-to-output weights w_kj: Δw_kj = η δ_k y_j, with δ_k = (t_k − z_k) f′(net_k), where η is the learning rate of the hidden-to-output part • For the input-to-hidden weights w_ji: Δw_ji = λ δ_j x_i, with δ_j = f′(net_j) Σ_k w_kj δ_k, where λ is the learning rate of the input-to-hidden part
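
Concretely, one update touches every weight once. The sketch below performs a single back-propagation step in the same notation, with f = tanh assumed so that f′(net) = 1 − f(net)²; the pattern, the learning rates, and the activations (taken to be roughly the result of a forward pass like the one sketched after slide 12) are all assumptions.

    // One back-propagation step in the slide notation. f = tanh is assumed, so
    // f'(net) = 1 - f(net)^2. The pattern (x, t), the learning rates, and the
    // forward-pass activations (y, z) are illustrative assumptions.
    #include <cstdio>

    int main() {
        const int D = 2, NH = 2, C = 1;
        const double eta = 0.1;                    // hidden-to-output learning rate
        const double lambda = 0.1;                 // input-to-hidden learning rate

        double x[D]  = { 1.0, -1.0 };              // input pattern (assumed)
        double t[C]  = { 1.0 };                    // desired output
        double y[NH] = { -0.462, 0.462 };          // hidden outputs from the forward pass
        double z[C]  = { -0.728 };                 // network output from the forward pass

        // Weights laid out as in the feed-forward sketch (column 0 holds the bias weight).
        double w_ji[NH][D + 1] = { { -0.5, 1.0, 1.0 }, { 0.5, 1.0, 1.0 } };
        double w_kj[C][NH + 1] = { {  0.0, 1.0, -1.0 } };

        // Output sensitivities: delta_k = (t_k - z_k) * f'(net_k).
        double delta_k[C];
        for (int k = 0; k < C; ++k)
            delta_k[k] = (t[k] - z[k]) * (1.0 - z[k] * z[k]);

        // Hidden sensitivities: delta_j = f'(net_j) * sum_k w_kj * delta_k.
        double delta_j[NH];
        for (int j = 0; j < NH; ++j) {
            double back = 0.0;
            for (int k = 0; k < C; ++k) back += w_kj[k][j + 1] * delta_k[k];
            delta_j[j] = (1.0 - y[j] * y[j]) * back;
        }

        // Updates: w_kj += eta * delta_k * y_j and w_ji += lambda * delta_j * x_i
        // (the bias weights see a constant input of 1).
        for (int k = 0; k < C; ++k) {
            w_kj[k][0] += eta * delta_k[k];
            for (int j = 0; j < NH; ++j) w_kj[k][j + 1] += eta * delta_k[k] * y[j];
        }
        for (int j = 0; j < NH; ++j) {
            w_ji[j][0] += lambda * delta_j[j];
            for (int i = 0; i < D; ++i) w_ji[j][i + 1] += lambda * delta_j[j] * x[i];
        }

        printf("delta_k = %.4f   delta_j = (%.4f, %.4f)\n", delta_k[0], delta_j[0], delta_j[1]);
        return 0;
    }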

  17. Stopping criterion • Training stops when the change in the criterion function J(w) is smaller than some preset threshold θ • The total training error is the sum of the errors over the n individual training patterns: J = Σ_{p=1..n} J_p

  18. BP Algorithm
      begin
        initialize network topology (# hidden units), w, criterion θ, learning rate η, m ← 0
        do
          m ← m + 1
          x^m ← randomly chosen pattern
          w_ji ← w_ji + η δ_j x_i;  w_kj ← w_kj + η δ_k y_j
        until ‖∇J(w)‖ < θ
        return w
      end

  19. The network has two modes of operation: • Feed-forward: a pattern is presented to the input units and the signals are passed (fed) through the network to obtain the values of the output units • Learning: an input pattern is presented and the network parameters (weights) are modified to reduce the distance between the computed output and the desired output

  20. Three-layer NN for the XOR problem
      Training patterns of the network (bipolar encoding; the third input is the bias):
        trainInputs[0][0] =  1;  trainInputs[0][1] = -1;  trainInputs[0][2] = 1;  // bias
        trainOutput[0]    =  1;
        trainInputs[1][0] = -1;  trainInputs[1][1] =  1;  trainInputs[1][2] = 1;  // bias
        trainOutput[1]    =  1;
        trainInputs[2][0] =  1;  trainInputs[2][1] =  1;  trainInputs[2][2] = 1;  // bias
        trainOutput[2]    = -1;
        trainInputs[3][0] = -1;  trainInputs[3][1] = -1;  trainInputs[3][2] = 1;  // bias
        trainOutput[3]    = -1;
      [Figure: network model]
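
These four patterns are exactly what the back-propagation loop of slide 18 consumes. The sketch below ties the pieces together into a complete per-pattern trainer for them; the 2-2-1 topology, tanh units, learning rates, random initialization, epoch limit, and stopping threshold are assumptions, since the transcript does not give them, and patterns are presented in order here rather than chosen at random as on slide 18.

    // End-to-end sketch: a 2-2-1 tanh MLP trained on the four bipolar XOR patterns
    // above with per-pattern back-propagation. Only the training data comes from the
    // slide; topology, rates, initialization, and the threshold theta are assumptions.
    #include <cmath>
    #include <cstdio>
    #include <cstdlib>

    const int D = 2, NH = 2, C = 1, NPAT = 4;
    const double eta = 0.1, lambda = 0.1, theta = 1e-4;

    double trainInputs[NPAT][D + 1] = { {  1, -1, 1 },          // third column is the bias input
                                        { -1,  1, 1 },
                                        {  1,  1, 1 },
                                        { -1, -1, 1 } };
    double trainOutput[NPAT] = { 1, 1, -1, -1 };

    double w_ji[NH][D + 1], w_kj[C][NH + 1];                    // last column is the bias weight

    double f(double net) { return std::tanh(net); }
    double small_rand() { return (double)std::rand() / RAND_MAX - 0.5; }

    int main() {
        for (int j = 0; j < NH; ++j) for (int i = 0; i <= D; ++i)  w_ji[j][i] = small_rand();
        for (int k = 0; k < C;  ++k) for (int j = 0; j <= NH; ++j) w_kj[k][j] = small_rand();

        double J = 0.0;
        int epoch = 0;
        for (; epoch < 10000; ++epoch) {
            J = 0.0;                                            // total squared error this epoch
            for (int p = 0; p < NPAT; ++p) {
                const double *x = trainInputs[p];
                double t = trainOutput[p];

                double y[NH + 1];                               // forward pass
                for (int j = 0; j < NH; ++j) {
                    double net = 0.0;
                    for (int i = 0; i <= D; ++i) net += w_ji[j][i] * x[i];
                    y[j] = f(net);
                }
                y[NH] = 1.0;                                    // bias unit for the output layer
                double net_k = 0.0;
                for (int j = 0; j <= NH; ++j) net_k += w_kj[0][j] * y[j];
                double z = f(net_k);
                J += 0.5 * (t - z) * (t - z);

                double delta_k = (t - z) * (1.0 - z * z);       // backward pass (f' = 1 - f^2)
                double delta_j[NH];
                for (int j = 0; j < NH; ++j)
                    delta_j[j] = (1.0 - y[j] * y[j]) * w_kj[0][j] * delta_k;

                for (int j = 0; j <= NH; ++j) w_kj[0][j] += eta * delta_k * y[j];
                for (int j = 0; j < NH; ++j)
                    for (int i = 0; i <= D; ++i) w_ji[j][i] += lambda * delta_j[j] * x[i];
            }
            if (J < theta) break;                               // stopping criterion of slide 17
        }
        printf("stopped after %d epochs, total squared error J = %g\n", epoch, J);
        return 0;
    }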

  21. The average squared difference between the desired and actual outputs for the XOR problem [Figures: previous work [5] vs. our results]

  22. Three-layer MLP for pattern recognition [Figures: training patterns of the network; network model]

  23. The average squared differences between the desired and actual outputs for “Digit Recognition System”

  24. Conclusion

  25. References
      [1] S. Theodoridis and K. Koutroumbas, "Pattern Recognition," 2nd ed., Elsevier Academic Press, 2003.
      [2] R. O. Duda, P. E. Hart, and D. G. Stork, "Pattern Classification," 2nd ed., 2000.
      [3] A. S. Pandya, "Pattern Recognition with Neural Networks using C++," 2nd ed., New York: IEEE Press.
      [4] F. Köksal, E. Alpaydin, and G. Dündar, "Weight quantization for multi-layer perceptrons using soft weight sharing," ICANN 2001, pp. 211-216.
      [5] J. L. Holt and J. N. Hwang, "Finite error precision analysis of neural network hardware implementation," IEEE Transactions on Computers, vol. 42, no. 3, pp. 1380-1389, March 1993.
      [6] P. Moerland and E. Fiesler, "Neural Network Adaptation for Hardware Implementation," Handbook of Neural Computation, January 1997.
      [7] M. Negnevitsky, "Multi-Layer Neural Networks with Improved Learning Algorithms," Proceedings of the Digital Imaging Computing: Techniques and Applications (DICTA 2005).
      [8] A. Ahmed and M. M. Fahmy, "Application of Multi-layer Neural Networks to Image Compression," 1997 IEEE International Symposium on Circuits and Systems, June 9-12, 1997, Hong Kong.
