
Artificial Neural Networks ECE.09.454/ECE.09.560 Fall 2010


Presentation Transcript


  1. Artificial Neural Networks ECE.09.454/ECE.09.560 Fall 2010
     Lecture 3, September 27, 2010
     Shreekanth Mandayam, ECE Department, Rowan University
     http://engineering.rowan.edu/~shreek/fall10/ann/

  2. Plan
     • Multilayer Perceptron
       • Architecture
       • Signal Flow
       • Learning rule - Backpropagation
     • Matlab MLP Demo
     • Finish Lab Project 1
     • Start Lab Project 2

  3. Multilayer Perceptron (MLP): Architecture
     [Figure: feedforward network with an input layer (x1, x2, x3), two hidden layers, and an output layer (y1, y2); layer-to-layer weights are labeled wji, wkj, and wlk, with constant bias inputs of 1 feeding each layer.]
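     In equation form, the pictured network maps inputs to outputs by composing weighted sums with activations. A sketch for the two-hidden-layer case shown, with biases omitted (the nesting mirrors the weight labels wji, wkj, wlk in the figure):

     y_l = \varphi\Big( \sum_k w_{lk} \, \varphi\Big( \sum_j w_{kj} \, \varphi\Big( \sum_i w_{ji} \, x_i \Big) \Big) \Big)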

  4. MLP: Characteristics
     [Figure: logistic activation φ(t), rising from 0 through 0.5 toward 1 over t = -1 to 1.]
     • Neurons possess sigmoidal (logistic) activation functions
     • Contains one or more “hidden layers”
     • Trained using the “backpropagation” algorithm
     • An MLP with one hidden layer is a “universal approximator”
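     The logistic activation sketched above is commonly written as follows; the slope parameter a is an assumption here (Haykin’s text includes it, and a = 1 is typical):

     \varphi(v) = \frac{1}{1 + e^{-av}}, \qquad \varphi'(v) = a \, \varphi(v)\bigl(1 - \varphi(v)\bigr)

     The second identity is what makes backpropagation convenient: the derivative is computed from the neuron output itself.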

  5. Artificial Neural Network
     • Massively parallel distributed processor made up of simple processing units, which can store and retrieve experiential knowledge
     • The network “learns” from the data presented to it
     • The “knowledge” is stored in the interconnection weights
     (Adapted from Haykin)

  6. MLP: Signal Flow
     [Figure: function signals propagate forward through the nodes; error signals propagate backward.]
     • Computations at each node, j:
       • Neuron output, yj
       • Gradient vector, dE/dwji
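     Both node-level quantities follow from the chain rule. A sketch in Haykin’s notation, where E(n) denotes the instantaneous error energy (a standard definition assumed here):

     y_j(n) = \varphi_j\Big( \sum_i w_{ji}(n) \, y_i(n) \Big)

     \frac{\partial E(n)}{\partial w_{ji}(n)} =
       \frac{\partial E(n)}{\partial e_j(n)} \,
       \frac{\partial e_j(n)}{\partial y_j(n)} \,
       \frac{\partial y_j(n)}{\partial v_j(n)} \,
       \frac{\partial v_j(n)}{\partial w_{ji}(n)}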

  7. Backpropagation: Notation
     At a node j:
     [Signal-flow graph at an output node j: inputs yi(n), weighted by wji(n), sum to the induced local field vj(n); the activation φ(·) gives the output yj(n); the desired response dj(n) enters through a -1 multiplier to form the error ej(n). Neuron i lies to the left of j, neuron k to its right.]
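     For an output node j, the quantities in the figure combine in the standard way (following Haykin; the learning-rate parameter η is an assumption added here):

     e_j(n) = d_j(n) - y_j(n)
     \delta_j(n) = e_j(n) \, \varphi_j'\bigl(v_j(n)\bigr)
     \Delta w_{ji}(n) = \eta \, \delta_j(n) \, y_i(n)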

  8. Backprop. (contd.): Notation
     If node j is a hidden node:
     [Signal-flow graph: yi(n), weighted by wji(n), sums to vj(n); φ(·) gives yj(n), which feeds node k through wkj(n) to produce vk(n) and yk(n); the desired response dk(n) enters through a -1 multiplier to form the error ek(n).]
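     A hidden node has no desired response of its own, so its local gradient is assembled from the deltas of the layer to its right (again in Haykin’s notation):

     \delta_j(n) = \varphi_j'\bigl(v_j(n)\bigr) \sum_k \delta_k(n) \, w_{kj}(n)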

  9. MLP Training
     • Forward Pass
       • Fix wji(n)
       • Compute yj(n)
     • Backward Pass
       • Calculate δj(n)
       • Update weights wji(n+1)
     [Figure: inputs x flow left-to-right (i → j → k) to produce outputs y in the forward pass; local gradients flow right-to-left in the backward pass.]
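     As a concrete illustration of the two passes, here is a minimal batch-gradient sketch in plain Matlab. This is not the course demo; the network size, learning rate eta, epoch count, and the XOR-style data are assumptions for illustration only.

     % Minimal 1-hidden-layer MLP trained by backpropagation (illustrative sketch).
     % Logistic activations throughout; bias handled by appending a constant 1 input.
     X = [0 0 1 1; 0 1 0 1];          % inputs (2 x 4), assumed XOR-style data
     D = [0 1 1 0];                   % desired responses (1 x 4)
     eta = 0.5;                       % learning rate (assumed)
     nh  = 4;                         % hidden neurons (assumed)

     W1 = 0.5*randn(nh, size(X,1)+1); % hidden weights (incl. bias column)
     W2 = 0.5*randn(1, nh+1);         % output weights (incl. bias column)
     phi = @(v) 1./(1 + exp(-v));     % logistic activation

     for epoch = 1:10000
         % ---- Forward pass: fix the weights, compute node outputs ----
         Xb  = [X; ones(1, size(X,2))];   % append bias input
         Y1  = phi(W1 * Xb);              % hidden-layer outputs
         Y1b = [Y1; ones(1, size(Y1,2))];
         Y2  = phi(W2 * Y1b);             % network outputs

         % ---- Backward pass: compute local gradients (deltas) ----
         E      = D - Y2;                             % error signal e(n)
         delta2 = E .* Y2 .* (1 - Y2);                % output-node delta
         delta1 = (W2(:,1:nh)' * delta2) .* Y1 .* (1 - Y1);  % hidden deltas

         % ---- Weight update: w(n+1) = w(n) + eta * delta * y ----
         W2 = W2 + eta * delta2 * Y1b';
         W1 = W1 + eta * delta1 * Xb';
     end

     Note how the hidden deltas are built from the output deltas through the transposed weights, exactly as in the hidden-node formula on slide 8.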

  10. MLP’s in Matlab http://engineering.rowan.edu/~shreek/fall10/ann/demos/mlp.m

  11. Lab Projects 1 and 2 • http://engineering.rowan.edu/~shreek/fall10/ann/lab1.html • http://engineering.rowan.edu/~shreek/fall10/ann/lab2.html

  12. Summary
