
Back Propagation Learning Algorithm

Overview: multilayer perceptrons trained with the back propagation learning algorithm, which alternates forward propagation (set the weights, calculate the output) with backward propagation (calculate the error, calculate the gradient vector, update the weights).



Presentation Transcript


  1. Neural Networks Multi Layer Perceptrons
  Back Propagation Learning Algorithm (figure: network with activation functions f(.))
  Forward propagation:
  • Set the weights
  • Calculate output
  Backward propagation:
  • Calculate error
  • Calculate gradient vector
  • Update the weights
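As a concrete sketch of the two phases, the MATLAB fragment below runs one backpropagation iteration for a small 2-input, 2-hidden, 1-output network with logistic activations. The sizes, data values, and learning rate are illustrative assumptions, not values from the slides.

```matlab
% One backpropagation iteration for a tiny 2-2-1 MLP (illustrative sketch).
x = [0.5; -0.3];                  % input vector (assumed data)
t = 0.2;                          % target output (assumed data)
W1 = randn(2,2); b1 = randn(2,1); % hidden-layer weights and biases
W2 = randn(1,2); b2 = randn(1,1); % output-layer weights and bias
eta = 0.1;                        % learning rate (assumed value)
f  = @(v) 1./(1 + exp(-v));       % logistic activation f(.)
df = @(y) y.*(1 - y);             % its derivative, written in terms of the output

% Forward propagation: with the current weights, calculate the output.
y1 = f(W1*x + b1);                % hidden-layer output
y2 = f(W2*y1 + b2);               % network output

% Backward propagation: calculate error, gradients, and update the weights.
e      = t - y2;                   % output error
delta2 = e .* df(y2);              % output-layer local gradient
delta1 = (W2' * delta2) .* df(y1); % hidden-layer local gradient
W2 = W2 + eta * delta2 * y1';   b2 = b2 + eta * delta2;
W1 = W1 + eta * delta1 * x';    b1 = b1 + eta * delta1;
```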

  2. Neural Networks Multi Layer Perceptrons
  Learning with Momentum
  • In an effort to speed up the learning process, each weight update is made to depend partly on the previous weight update. This is called momentum, because it tends to keep the error rolling down the error surface.
  • Because many consecutive updates to a particular weight are in the same direction, adding momentum typically speeds up learning in many applications.
  • When using momentum, the update rule for the network weights is modified to be
  Δw(n) = −η ∇E + α Δw(n−1),
  where α (typically 0.05) is the momentum, η is the learning rate, and n is the iteration number. A sketch follows.
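A minimal sketch of this momentum update on a toy quadratic error; the learning rate and the gradient are placeholders, while α = 0.05 follows the slide.

```matlab
% Gradient descent with momentum (sketch on a toy error E = w'*w).
eta   = 0.1;             % learning rate (assumed value)
alpha = 0.05;            % momentum, as quoted on the slide
w     = randn(4,1);      % weight vector
dw    = zeros(size(w));  % previous update, Delta_w(n-1)

for n = 1:100
    grad_E = 2*w;                  % placeholder gradient of E = w'*w
    dw = -eta*grad_E + alpha*dw;   % Delta_w(n) = -eta*gradE + alpha*Delta_w(n-1)
    w  = w + dw;                   % apply the update
end
```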

  3. Neural Networks Multi Layer Perceptrons
  Learning with Momentum
  • The momentum can be seen as, in practice, increasing the learning rate.
  • This is in accordance with several heuristics that should be used in neural network design to improve the result (one way to encode them is sketched below):
  • Each weight should have a local learning rate.
  • Each learning rate should be allowed to vary over time.
  • Consecutive updates with the same sign to a weight should increase that weight’s learning rate.
  • Consecutive updates with alternating sign to a weight should decrease that weight’s learning rate.
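The slide states heuristics rather than a formula. One classical scheme in this spirit, sketched below, keeps a local learning rate per weight and grows it additively on same-sign gradients while shrinking it multiplicatively on sign flips (a delta-bar-delta-style rule; the constants are assumptions).

```matlab
% Per-weight adaptive learning rates in the spirit of the heuristics above.
w     = randn(4,1);
eta   = 0.1 * ones(size(w));  % a local learning rate for each weight
gprev = zeros(size(w));       % previous gradient, for the sign comparison
kappa = 0.01; phi = 0.5;      % grow/shrink constants (assumed values)

for n = 1:100
    g = 2*w;                         % placeholder gradient of E = w'*w
    same = sign(g) == sign(gprev);   % consecutive updates with the same sign?
    eta(same)  = eta(same) + kappa;  % same sign: increase the local rate
    eta(~same) = eta(~same) * phi;   % alternating sign: decrease it
    w = w - eta .* g;
    gprev = g;
end
```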

  4. Neural Networks Multi Layer Perceptrons
  Learning with Weighted Momentum
  • Various efforts have focused on deriving additional forms of momentum.
  • One such method is to relate the momentum of the previous update to the current calculation of the error gradient.
  • By doing so, the momentum can be preserved even when an iteration attempts an update contrary to the most recent updates.
  • The update rule for the network weights in this case is given below.
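The transcript drops the slide's formula. A common weighted form, assumed here, blends the previous update and the current gradient step as a convex combination, Δw(n) = α Δw(n−1) − (1−α) η ∇E, so that a single contrary gradient does not erase the accumulated momentum:

```matlab
% Weighted momentum (sketch): the update is a weighted blend of the
% previous update and the current gradient step (assumed form).
eta = 0.1; alpha = 0.9;       % illustrative values
w   = randn(4,1);
dw  = zeros(size(w));

for n = 1:100
    grad_E = 2*w;                          % placeholder gradient of E = w'*w
    dw = alpha*dw - (1-alpha)*eta*grad_E;  % convex combination of the two terms
    w  = w + dw;
end
```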

  5. Neural Networks Multi Layer Perceptrons
  Learning with Variable Learning Rate
  • The basic idea: speed up convergence by increasing the learning rate on flat parts of the error surface and decreasing it when the slope increases.
  • If the error increases by more than a predefined percentage θ (e.g., 1–5%), then:
  • the weight update is discarded,
  • the learning rate is decreased by a factor 0 < γ < 1 (e.g., γ = 0.7),
  • the momentum is set to zero.
  • If the error increases by less than θ:
  • the weight update is accepted,
  • the learning rate is unchanged,
  • if the momentum has been set to zero, it is reset to its original value.
  • If the error decreases:
  • the weight update is accepted,
  • the learning rate is increased by some factor β > 1 (e.g., β = 1.05),
  • if the momentum has been set to zero, it is reset to its original value.
  A sketch of this rule follows.
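A sketch of the accept/reject rule on a toy quadratic error, with γ = 0.7, β = 1.05, and a 4% threshold for θ as examples from the slide; the base learning rate and the momentum-restart logic are assumptions.

```matlab
% Variable learning rate with momentum (sketch of the rule above).
E = @(w) w'*w;  gradE = @(w) 2*w;   % toy error function and its gradient
theta = 1.04;                       % accept unless error grows by > 4%
gamma = 0.7;  beta = 1.05;          % decrease/increase factors (slide values)
eta = 0.1;  alpha0 = 0.05;  alpha = alpha0;
w  = randn(4,1);  dw = zeros(size(w));  Eold = E(w);

for n = 1:100
    dw_try = -eta*gradE(w) + alpha*dw;  % candidate momentum update
    Enew   = E(w + dw_try);
    if Enew > theta * Eold              % error grew by more than theta:
        eta   = gamma * eta;            %   discard the update, shrink the rate,
        alpha = 0;                      %   and set the momentum to zero
    else                                % otherwise accept the update
        w = w + dw_try;  dw = dw_try;
        alpha = alpha0;                 % restore momentum if it was zeroed
        if Enew < Eold
            eta = beta * eta;           % error decreased: grow the rate
        end
        Eold = Enew;
    end
end
```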

  6. Neural Networks MLP for System Modeling
  Feedforward Network (figure: input → two neuron layers with activation functions f(.) → output)

  7. Neural Networks MLP for System Modeling
  Feedforward Network

  8. Neural Networks MLP for System Modeling
  Recurrent Networks (figures: external recurrence feeds the network output back to the input through a time delay element; internal recurrence feeds neuron-layer outputs back through time delay elements)

  9. Neural Networks MLP for System Modeling
  Dynamic System (figure: input → dynamic system → output, characterized by a system parameter and an input-output data vector)

  10. Neural Networks MLP for System Modeling
  Dynamic Model (figure: input → dynamic model → output, characterized by weights, biases, and an input-output data vector)

  11. Neural Networks MLP for System Modeling
  Neural Network Dynamic Model: Feedforward (figure; legend: the system output, and the model output as an estimate of the system output)

  12. Neural Networks MLP for System Modeling
  Neural Network Dynamic Model: Recurrent (figure)

  13. Neural Networks MLP for System Modeling
  Tapped Delay Line (TDL) (figure: a TDL built from delay units 1 through n)
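A TDL is simply a shift register holding the n most recent samples of a signal; a minimal sketch (the signal and the depth are assumptions):

```matlab
% Tapped delay line (TDL): holds the n most recent samples of a signal.
n    = 4;
tdl  = zeros(n,1);                 % taps: [u(k-1); u(k-2); ...; u(k-n)]
u    = sin(0.1*(1:50));            % some input signal (assumed)
taps = zeros(n, numel(u));         % record of the tap contents over time

for k = 1:numel(u)
    taps(:,k) = tdl;               % delayed values available at step k
    tdl = [u(k); tdl(1:end-1)];    % shift in the newest sample, drop the oldest
end
```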

  14. Neural Networks MLP for System Modeling
  Implementation (figure: the dynamic system's input and output pass through two TDLs into a feedforward network with external recurrence)
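In simulation with external recurrence, the model's own output, rather than the measured system output, is fed back through the delay line. In the sketch below, net is a stand-in for a trained one-step MLP model, assumed for illustration:

```matlab
% External recurrence (sketch): the model output is fed back through a TDL.
net = @(u1, y1) 0.9*y1 + 0.1*u1;   % stand-in for a trained one-step MLP
u   = ones(1,100);                 % input sequence (assumed)
y   = zeros(1,100);                % model output, fed back to the input

for k = 2:numel(u)
    y(k) = net(u(k-1), y(k-1));    % feedback of the model's own output
end
```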

  15. Neural Networks MLP for System Modeling
  Example: Single Tank System
  A: cross-sectional area of the tank
  a: cross-sectional area of the pipe
  Learning data generation (diagram: save data to workspace; figure: area of operation)
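The transcript does not include the tank equations. A standard single-tank model, assumed here, is A·dh/dt = u − a·sqrt(2·g·h); the sketch below integrates it by forward Euler to produce 201 samples over 200 seconds, matching the data size quoted on the next slide. All parameter values are illustrative.

```matlab
% Learning data generation for the single tank (assumed standard model:
% A*dh/dt = u - a*sqrt(2*g*h); parameters are illustrative).
A = 1; a = 0.05; g = 9.8;            % tank and pipe cross-sections, gravity
Ts = 1;                              % 1 s sampling: 201 samples in 200 s
t  = 0:Ts:200;
u  = 0.2 + 0.1*sin(0.05*t);          % excitation inside the operating area
h  = zeros(size(t)); h(1) = 0.5;     % tank level

for k = 1:numel(t)-1                 % forward-Euler integration
    h(k+1) = h(k) + Ts/A * (u(k) - a*sqrt(2*g*h(k)));
end
save('tankdata.mat', 'u', 'h');      % save the data to the workspace/file
```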

  16. Neural Networks MLP for System Modeling
  Example (continued)
  Data size: 201 samples from 200 seconds of simulation
  2–2–1 network, as a feedforward network and as an external recurrent network
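A minimal sketch of training the 2–2–1 feedforward model on the generated data (u and h from the sketch above), using the regressor [u(k−1); h(k−1)] to predict h(k). The regressor choice, linear output neuron, epoch count, and learning rate are assumptions; the bias additions rely on implicit expansion (MATLAB R2016b or later).

```matlab
% 2-2-1 feedforward model of the tank data (batch backprop sketch).
X = [u(1:end-1); h(1:end-1)];        % regressors: [u(k-1); h(k-1)]
T = h(2:end);                        % targets: h(k)
W1 = 0.1*randn(2,2); b1 = zeros(2,1);
W2 = 0.1*randn(1,2); b2 = 0;
f  = @(v) 1./(1 + exp(-v));          % logistic hidden activations
eta = 0.05;  N = size(X,2);

for epoch = 1:2000
    Y1 = f(W1*X + b1);               % hidden layer (implicit expansion)
    Y2 = W2*Y1 + b2;                 % linear output neuron (assumed)
    d2 = T - Y2;                     % output delta = error (linear output)
    d1 = (W2'*d2) .* Y1 .* (1-Y1);   % hidden deltas
    W2 = W2 + eta*(d2*Y1')/N;  b2 = b2 + eta*mean(d2);
    W1 = W1 + eta*(d1*X')/N;   b1 = b1 + eta*mean(d1,2);
end
fprintf('final MSE: %g\n', mean((T - (W2*f(W1*X + b1) + b2)).^2));
```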

  17. Neural Networks MLP for System Modeling
  Homework 5
  • A neural network with 2 inputs and 2 hidden neurons does not seem to be good enough to model the Single Tank System. Now, design a neural network with 4 inputs and 4 hidden neurons to model the system. Use a bias in all neurons and take all a = 1. (figures: delta of the 2–2–1 network; the 4–4–1 network)
  • Be sure to obtain decreasing errors.
  • Submit the hardcopy and softcopy of the m-file.

  18. Neural Networks MLP for System Modeling
  Homework 5A (smaller Student IDs)
  • The 4 students with the smaller Student IDs redo Homework 5 with the following changes:
  • The network inputs are u(k–1), u(k–3), y(k–1), and y(k–4).
  • The activation functions for the neurons in the hidden layer are as specified on the slide.
  • Be sure to obtain decreasing errors.
  • Compare the result with the previous result of Homework 5.
  • Submit the hardcopy and softcopy of the m-file. (figure: the 4–4–1 network)

  19. Neural Networks MLP for System Modeling
  Homework 5B (greater Student IDs)
  • The 4 students with the greater Student IDs redo Homework 5 with the following changes:
  • The network inputs are u(k–1), y(k–1), y(k–2), and y(k–3).
  • The activation functions for the neurons in the hidden layer are as specified on the slide.
  • Be sure to obtain decreasing errors.
  • Compare the result with the previous result of Homework 5.
  • Submit the hardcopy and softcopy of the m-file. (figure: the 4–4–1 network)
