
Assignment #3 Question 2
Neural Networks Lecture 18: Applications of SOMs






Presentation Transcript


1. Assignment #3 Question 2
• Regarding your cascade correlation projects, here are a few tips to make your life easier.
• First of all, the book suggests that after adding a new hidden-layer unit and training its weights, we only need to train the weights of the newly added connections in the output layer (or, your instructor's idea, just use linear regression to determine them, as sketched below).
• While that is a very efficient solution, the original paper on cascade correlation suggests always retraining all output-layer weights after adding a hidden-layer unit.
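The linear-regression idea above can be done in a single step if the output units are treated as linear during the solve. Here is a minimal sketch in Python/NumPy; the names acts (per-exemplar activations feeding the output layer) and targets (desired outputs) are illustrative, not part of any required interface.

import numpy as np

# acts:    (P, H) activations feeding the output layer for all P exemplars
#          (inputs, all installed hidden-unit outputs, and a bias column of 1s)
# targets: (P, M) desired outputs
def solve_output_weights(acts, targets):
    # Least-squares solution for all output-layer weights at once,
    # assuming linear output units for this step.
    weights, *_ = np.linalg.lstsq(acts, targets, rcond=None)
    return weights   # shape (H, M)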

2. Assignment #3 Question 2
• This will require more training, but it may find a better (i.e., lower-error) overall solution for the weight vectors.
• Furthermore, it will be easier for you to use the same training procedure over and over again instead of writing a single-weight update function or a linear regression function.
• For this output-weight training, you can simply use your backpropagation algorithm and remove the hidden-layer training.
• The cascade correlation authors suggest Quickprop for speedup, but Rprop also works; see the sketch after this list.
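As an illustration of output-layer-only training, here is a minimal Rprop sketch in Python/NumPy (the iRprop- variant, with sigmoid output units and squared error); all names, shapes, and constants are assumptions for this example, not the assignment's required interface.

import numpy as np

def rprop_train_outputs(acts, targets, weights, epochs=100,
                        eta_plus=1.2, eta_minus=0.5,
                        step_init=0.1, step_min=1e-6, step_max=50.0):
    # acts: (P, H) activations feeding the output layer (bias column included)
    # targets: (P, M) desired outputs; weights: (H, M) output-layer weights
    step = np.full_like(weights, step_init)
    prev_grad = np.zeros_like(weights)
    for _ in range(epochs):
        out = 1.0 / (1.0 + np.exp(-acts @ weights))        # sigmoid outputs
        err = out - targets
        grad = acts.T @ (err * out * (1.0 - out))          # dE/dw for squared error
        sign_change = np.sign(grad) * np.sign(prev_grad)
        step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
        weights -= np.sign(grad) * step                     # update by sign and step size only
        prev_grad = np.where(sign_change < 0, 0.0, grad)    # iRprop-: forget gradient on sign change
    return weights

Note that no hidden-layer training appears anywhere in the loop, which is exactly the simplification described above.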

3. Assignment #3 Question 2
• In order to train the weights of a new hidden-layer unit, you need to know the current error for each output neuron and each exemplar.
• You can compute these values once and store them in an array.
• After creating a new hidden unit with random weights and before training it, determine the current sign Sk of the covariance between the unit's output and the error in output unit k (do not update Sk during training; doing so can lead to convergence problems). A sketch of this setup follows below.
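A minimal sketch of determining the frozen signs Sk in Python/NumPy; errors and cand_out are illustrative names for the stored error array and the candidate unit's outputs, not prescribed by the assignment.

import numpy as np

# errors:   (P, M) stored array, errors[p, k] = output k minus target k for exemplar p,
#           computed once with the current network before candidate training starts
# cand_out: (P,) outputs of the new, randomly initialized hidden unit for all exemplars
def covariance_signs(cand_out, errors):
    v_centered = cand_out - cand_out.mean()
    e_centered = errors - errors.mean(axis=0)    # center each output unit's error
    cov = v_centered @ e_centered                # covariance per output unit, shape (M,)
    signs = np.sign(cov)                         # S_k, kept fixed during candidate training
    signs[signs == 0] = 1.0
    return signs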

4. Assignment #3 Question 2
• For the hidden-layer training, you can also use Quickprop or Rprop.
• Once a new hidden-layer unit has been installed and trained, its weights, and thus its output for a given network input, will never change.
• Therefore, you can store the outputs of all hidden units in arrays and use these stored values for the remainder of the network buildup/training; see the sketch after this list.
• No optimizations are required for this question (sorry, no prizes here), but it is interesting to try it anyway.
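One way to cache the frozen hidden-unit outputs, sketched in Python/NumPy with illustrative names (inputs is assumed to already contain a bias column):

import numpy as np

# inputs:       (P, n+1) network inputs for all P exemplars, including a bias column
# hidden_acts:  (P, H) cached outputs of the H already installed hidden units
# unit_weights: (n+1+H,) trained weights of the newly installed unit
def install_hidden_unit(hidden_acts, inputs, unit_weights):
    # The new unit sees the original inputs plus all previously installed units.
    net = np.hstack([inputs, hidden_acts]) @ unit_weights
    new_col = 1.0 / (1.0 + np.exp(-net))                  # sigmoid activation
    return np.hstack([hidden_acts, new_col[:, None]])     # cached column never changes again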

5. Self-Organizing Maps (Kohonen Maps)
Network structure: an input vector x = (x1, x2, …, xn) is fully connected to a single layer of output units O1, O2, …, Om, which produce the output vector o. [Network diagram]

6. Self-Organizing Maps (Kohonen Maps)

7. Unsupervised Learning in SOMs
In the textbook, a different kind of neighborhood function is used. Instead of a smooth, continuous function of the winner unit i and unit k to indicate connection strength, a neighborhood boundary is defined. All neurons within the neighborhood of the winner unit adapt their weights to the current input by exactly the same proportion (the learning rate). The size of the neighborhood is decreased over time. A sketch of one such training step follows below.
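A minimal sketch of one such training step in Python/NumPy, for a one-dimensional chain of output units; the learning rate, the stepwise radius schedule, and all names are illustrative assumptions.

import numpy as np

# weights: (m, n) weight vectors of the m output units; x: input vector of length n
def som_step(weights, x, t, alpha=0.1):
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))   # best-matching unit
    radius = max(1, 4 - t // 10)           # neighborhood boundary shrinks stepwise over time
    lo, hi = max(0, winner - radius), min(len(weights), winner + radius + 1)
    # every unit inside the boundary moves toward x by exactly the same proportion
    weights[lo:hi] += alpha * (x - weights[lo:hi])
    return weights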

8. Unsupervised Learning in SOMs
[Figure: neighborhood boundaries shrinking over time: 0 ≤ t < 10, 10 ≤ t < 20, 20 ≤ t < 30, 30 ≤ t < 40, and t ≥ 40]

9. Unsupervised Learning in SOMs
Example I: Learning a one-dimensional representation of a two-dimensional (triangular) input space. [Snapshots after 0, 20, 100, 1000, 10000, and 25000 training steps]

10. Unsupervised Learning in SOMs
Example II: Learning a two-dimensional representation of a two-dimensional (square) input space.

11. Unsupervised Learning in SOMs
Example III: Learning a two-dimensional mapping of texture images

12. Unsupervised Learning in SOMs
Examples IV and V: Learning two-dimensional mappings of RGB colors and NFL images: http://www.shy.am/2005/12/kohonen-self-organizing-map-demos/
Example VI: Interactive SOM learning of two- and three-dimensional shapes: http://www.cs.umb.edu/~marc/cs672/wsom.exe

13. Unsupervised Learning in SOMs
Example VII: A Self-organizing Semantic Map for Information Retrieval (Xia Lin, Dagobert Soergel, Gary Marchionini) http://www.cs.umb.edu/~marc/cs672/lin1991.pdf
