
Self-Organizing Maps (SOMs)






Presentation Transcript


  1. AI: Neural Networks, lecture 8. Tony Allen, School of Computing & Informatics, Nottingham Trent University

  2. Self-Organizing Maps (SOMs) • Winning neuron (orange) found using: [equation shown as image, not reproduced in transcript] • Weights of neighbouring neurons (red & purple) updated using: [equation shown as image, not reproduced in transcript]
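The formulas referred to on this slide were shown as images and are not in the transcript, so the sketch below assumes the standard Kohonen formulation: the winner is the neuron whose weight vector has the smallest Euclidean distance to the input, and each neighbouring neuron's weights are moved toward the input, scaled by the learning rate and a neighbourhood function. All function and parameter names here are illustrative, not the lecture's notation.

```python
import numpy as np

def som_step(weights, x, lr, sigma, grid):
    """One standard Kohonen SOM step (assumed formulation, not the slide's exact notation).

    weights : (n_neurons, n_inputs) weight matrix
    x       : current input vector
    lr      : learning rate alpha(t)
    sigma   : neighbourhood radius
    grid    : (n_neurons, 2) map coordinates of each neuron
    """
    # Winning neuron: smallest Euclidean distance between x and each weight vector
    dists = np.linalg.norm(weights - x, axis=1)
    winner = int(np.argmin(dists))

    # Neighbourhood function: Gaussian in map space around the winner
    grid_d2 = np.sum((grid - grid[winner]) ** 2, axis=1)
    h = np.exp(-grid_d2 / (2 * sigma ** 2))

    # Weight update: move every neuron's weights toward x, scaled by lr and neighbourhood
    weights += lr * h[:, None] * (x - weights)
    return winner
```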

  3. Recurrent SOM - STORM • Recurrency can be built into a SOM by copying all or part of the SOM information from one time step into the input for the next time step. [figure: input vector = input bits 0000000001 followed by context bits 1001010100] • In the case of STORM, the context vector represents only the previous winning neuron, using an n-bit Gray-code coordinate vector.
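As described above, STORM's effective input is the current symbol concatenated with a context vector that Gray-codes the previous winner's map coordinates. Below is a minimal sketch of that concatenation; the helper names and bit widths are assumptions (the 10-bit context in the slide's example matches this sketch only if each coordinate uses 5 bits).

```python
import numpy as np

def graycode(value, n_bits):
    """n-bit reflected binary Gray code of an integer, as a 0/1 vector (MSB first)."""
    g = value ^ (value >> 1)
    return np.array([(g >> i) & 1 for i in reversed(range(n_bits))], dtype=float)

def storm_input(symbol_onehot, prev_winner_rc, n_bits=5):
    """Concatenate the current symbol with a Gray-coded (row, col) context vector.

    prev_winner_rc : (row, col) of the previous winning neuron, or None at sequence start.
    """
    if prev_winner_rc is None:                     # context cleared between sequences
        context = np.zeros(2 * n_bits)
    else:
        row, col = prev_winner_rc
        context = np.concatenate([graycode(row, n_bits), graycode(col, n_bits)])
    return np.concatenate([np.asarray(symbol_onehot, dtype=float), context])
```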

  4. Recurrent SOM Learning Algorithm
1. Initialise weights (random values), set topological neighbourhood and learning rate parameters, clear the context vector.
2. For each sequence in the training set, do steps 3-8.
3. For each input vector X in the sequence, do steps 4-7.
4. For each neuron j, compute the Euclidean distance Y_j between X and its weight vector.
5. Find the index J such that Y_J is a minimum (the winning neuron).
6. For all units j within a specified neighbourhood of J, and for all i, update the weights [update rule shown as image, not reproduced in transcript].
7. Copy the row & column vector of the winning neuron to the context vector.
8. Clear the context vector between sequences.
9. Update the learning rate (α) and reduce the topological neighbourhood.
10. Test the stopping condition.
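Read as pseudocode, steps 1-10 are a nested loop over sequences and symbols with the context cleared at every sequence boundary. The rough sketch below reuses the hypothetical som_step and storm_input helpers from the earlier snippets; the decay schedules for step 9 are placeholders, not the lecture's actual parameters.

```python
def train_storm(sequences, weights, grid, epochs=50, lr0=0.5, sigma0=3.0):
    """Sketch of the recurrent SOM learning algorithm (steps 1-10), under assumed helpers."""
    for epoch in range(epochs):                    # step 10 approximated by a fixed epoch count
        # Step 9: decay the learning rate and shrink the topological neighbourhood
        lr = lr0 * (1 - epoch / epochs)
        sigma = max(0.5, sigma0 * (1 - epoch / epochs))
        for seq in sequences:                      # step 2: each training sequence
            prev_winner = None                     # steps 1/8: clear the context vector
            for symbol_onehot in seq:              # step 3: each input vector X
                x = storm_input(symbol_onehot, prev_winner)     # input + Gray-coded context
                winner = som_step(weights, x, lr, sigma, grid)  # steps 4-6
                # Step 7: the winner's (row, col) becomes the context for the next step
                prev_winner = tuple(int(c) for c in grid[winner])
    return weights
```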

  5. STORM: Grammar induction • Simple artificial regular grammar (REBER grammar) • Seven symbols, six states with two recursive states • Example sentence: BTSSXSE
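The Reber grammar has a fixed, widely published transition graph; the generator below encodes that textbook version (the lecture's state numbering may differ) and produces sentences such as BTSSXSE.

```python
import random

# Standard Reber grammar transition graph: state -> list of (symbol, next_state).
# Seven symbols (B, T, P, S, X, V, E); states 1 and 2 are the two recursive states.
REBER = {
    0: [('T', 1), ('P', 2)],
    1: [('S', 1), ('X', 3)],
    2: [('T', 2), ('V', 4)],
    3: [('X', 2), ('S', 5)],
    4: [('P', 3), ('V', 5)],
}

def make_reber_string(rng=random):
    """Generate one grammatical sentence, e.g. 'BTSSXSE'."""
    out, state = ['B'], 0
    while state != 5:
        symbol, state = rng.choice(REBER[state])
        out.append(symbol)
    out.append('E')
    return ''.join(out)

print(make_reber_string())   # e.g. BTSSXSE or BPVVE
```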

  6. STORM – String memorisation [figure: map with winning neurons labelled E, T, S, B, X, S, S, X, E] • STORM uses the current input and context to memorise strings, e.g. 1. B T X S E  2. B T S X S E

  7. STORM – Rule construction mechanism [figure: pairs of neurons with matching labels E, T, S, B, X, S, S, X, E bound together] • The network then uses similarities in future-context to identify states (functional-equivalence theory). • It binds neurons together into states via a temporal Hebbian learning mechanism. Example strings: 1. B T X S E  2. B T S X S E
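The slide describes rule construction only at a high level, so the following is just one loose way to picture the future-context idea, not the paper's actual mechanism: tally which neuron tends to win next after each neuron, then treat neurons with similar successor profiles as candidates for the same state.

```python
import numpy as np

def successor_counts(winner_sequences, n_neurons):
    """Count, for each neuron, which neuron wins at the next time step."""
    counts = np.zeros((n_neurons, n_neurons))
    for winners in winner_sequences:
        for a, b in zip(winners, winners[1:]):
            counts[a, b] += 1
    return counts

def group_by_future_context(counts, threshold=0.9):
    """Pair neurons whose normalised successor profiles are highly similar
    (a crude stand-in for the functional-equivalence binding step)."""
    profiles = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    groups = []
    for i in range(len(profiles)):
        for j in range(i + 1, len(profiles)):
            dot = profiles[i] @ profiles[j]
            norm = np.linalg.norm(profiles[i]) * np.linalg.norm(profiles[j])
            if norm > 0 and dot / norm > threshold:
                groups.append((i, j))
    return groups
```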

  8. Results [figure: trained network, neurons labelled E, T, S, B, X, S, S, X, E]

  9. Recurrent SOM – prediction performance • The network's ability to learn the grammar was measured by its performance at predicting future symbols in a sequence. [figure: best and second-best matching neurons compared against the context of the current winner, 1001010100] • STORM predicts the next two symbols by finding the two neurons whose context vector best matches that of the current winning neuron. The input vectors of these two matching neurons then represent the predicted next symbols.
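Below is a minimal sketch of that prediction step as described on this slide: match every neuron's context weights against the Gray-coded position of the current winner, take the best and second-best matches, and read the predicted symbols from their input weights. It reuses the hypothetical graycode helper and the input/context weight split assumed in the earlier snippets.

```python
import numpy as np

def predict_next_symbols(weights, winner, grid, n_symbol_bits, symbols, n_bits=5):
    """Predict the next two symbols from the current winning neuron (sketch).

    weights       : (n_neurons, n_symbol_bits + 2*n_bits) weight matrix
    winner        : index of the current winning neuron
    n_symbol_bits : width of the input (symbol) part of each weight vector
    symbols       : list mapping one-hot position -> symbol character
    """
    # Gray-coded (row, col) of the current winner - the context the next winner will see
    row, col = (int(c) for c in grid[winner])
    target = np.concatenate([graycode(row, n_bits), graycode(col, n_bits)])

    # Distance from every neuron's context weights to this target context
    d = np.linalg.norm(weights[:, n_symbol_bits:] - target, axis=1)
    d[winner] = np.inf                       # exclude the current winner itself

    best, second = np.argsort(d)[:2]         # best and second-best matching neurons
    # The input part of each matching neuron encodes its predicted symbol
    return [symbols[int(np.argmax(weights[j, :n_symbol_bits]))] for j in (best, second)]
```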

  10. Results • Ten identical models were trained on separate, randomly generated Reber grammar sequences. • Two became perfect grammar recognisers, able to correctly predict the next symbol for all training and test sequences. • The average post-training recognition rate was 71%.

  11. References McQueen, T.A., Hopgood, A.A., Allen, T.J. and Tepper, J.A. "Extracting finite structure from infinite language" Knowledge-Based Systems, 18 (2005) pp 135-141. ISSN: 0950-7051.
