
Artificial Neural Networks II - Outline



1. Artificial Neural Networks II - Outline
• Cascade Nets and Cascade-Correlation Algorithm
  • Architecture - incremental building of the net
• Hopfield Networks
  • Recurrent networks, associative memory
  • Hebb learning rule
  • Energy function and capacity of the Hopfield network
  • Applications
• Self-Organising Networks
  • Spatial representation of data used to code the information
  • Unsupervised learning
  • Kohonen Self-Organising Maps
  • Applications
J. Kubalík, Gerstner Laboratory for Intelligent Decision Making and Control

2. Cascade Nets and Cascade-Correlation Algorithm
• Starts with an input and an output layer of neurons and builds a hierarchy of hidden units
• Feed-forward network - n inputs, m outputs, h hidden units
• Perceptrons in the hidden layer are ordered - lateral connections
  • inputs come from the input layer and from all antecedent hidden units
  • the i-th hidden unit therefore has n + (i-1) inputs
• Output units are connected to all input and hidden units

3. Cascade Nets: Topology
[Figure: network topology - input layer (x), hidden units (z), output layer (y)]
• Active mode (a reconstruction of the formulas is sketched below):
  • each hidden perceptron computes its output from the inputs and all antecedent hidden units, for i = 1, …, h
  • each output unit computes its output from the inputs and all hidden units, for j = 1, …, m
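The slide's active-mode formulas did not survive the transcript; the following LaTeX is a plausible reconstruction consistent with the architecture on slide 2. The weight symbols v, u, w and the activation function f are assumed notation, not taken from the original.

```latex
% Reconstruction (notation assumed): x = inputs, z_i = hidden outputs, y_j = net outputs.
\[
  z_i = f\Big(\sum_{k=1}^{n} v_{ik}\, x_k \;+\; \sum_{l=1}^{i-1} u_{il}\, z_l\Big),
  \qquad i = 1,\dots,h
\]
\[
  y_j = f\Big(\sum_{k=1}^{n} w_{jk}\, x_k \;+\; \sum_{l=1}^{h} w'_{jl}\, z_l\Big),
  \qquad j = 1,\dots,m
\]
```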

4. Cascade-Correlation Algorithm
• Start with a minimal configuration of the network (h = 0)
• Repeat until satisfied (see the sketch below):
  • Initialise a set of candidates for a new hidden unit, i.e. connect them to the input units (and to all previously installed hidden units)
  • Adapt their weights in order to maximise the correlation between their outputs and the error of the network
  • Choose the best candidate and connect it to the outputs
  • Adapt the weights of the output perceptrons
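A compressed, illustrative Python sketch of this loop, not Fahlman and Lebiere's exact procedure: candidates are scored by simple random search over the absolute correlation with the residual error (the real algorithm trains them by gradient ascent on that correlation), only the first output is used for scoring, and all names and the XOR toy data are made up for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_outputs(H, T, epochs=500, lr=0.1):
    """Train the output layer only (linear weights + tanh) by gradient descent."""
    W = np.zeros((H.shape[1], T.shape[1]))
    for _ in range(epochs):
        Y = np.tanh(H @ W)
        W -= lr * H.T @ ((Y - T) * (1.0 - Y ** 2)) / len(H)
    return W

def cascade_correlation(X, T, max_hidden=5, n_candidates=50):
    H = np.hstack([X, np.ones((len(X), 1))])            # inputs + bias feed every unit
    hidden = []
    W = fit_outputs(H, T)
    for _ in range(max_hidden):
        E = np.tanh(H @ W) - T                          # residual error of the current net
        # Candidate pool: random units seeing H, scored by |correlation| with the error.
        cands = rng.normal(scale=1.0, size=(n_candidates, H.shape[1]))
        scores = [abs(np.nan_to_num(np.corrcoef(np.tanh(H @ w), E[:, 0]))[0, 1])
                  for w in cands]
        best = cands[int(np.argmax(scores))]            # winner is frozen into the net
        hidden.append(best)
        H = np.hstack([H, np.tanh(H @ best)[:, None]])  # its output becomes a new input
        W = fit_outputs(H, T)                           # retrain only the output weights
    return hidden, W

def predict(X, hidden, W):
    H = np.hstack([X, np.ones((len(X), 1))])
    for w in hidden:
        H = np.hstack([H, np.tanh(H @ w)[:, None]])
    return np.tanh(H @ W)

# Toy usage: XOR with bipolar targets.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[-1.], [1.], [1.], [-1.]])
hidden, W = cascade_correlation(X, T)
print(np.sign(predict(X, hidden, W)).ravel())           # ideally [-1  1  1 -1]
```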

5. Remarks on the Cascade-Correlation Algorithm
• Greedy learning mechanism
• Incremental, constructive learning algorithm
  • easy to learn additional examples
• Typically faster than backpropagation
  • only one layer of weights is optimised in each step (linear complexity)
• Easy to parallelise the maximisation of the correlation

6. Associative Memory
• Problem:
  • Store a set of p patterns
  • When given a new pattern, the network returns the stored pattern that most closely resembles the new one
  • The memory should be insensitive to small errors in the input pattern
• Content-addressable memory - the index key for searching the memory is a portion of the searched information
  • autoassociative - refinement of the input information (B&W picture → colours)
  • heteroassociative - evocation of associated information (friend's picture → name)

7. Hopfield Model
• Auto-associative memory
• Topology - a cyclic network of n completely interconnected neurons
• ξ1, …, ξn ∈ Z - internal potentials
• y1, …, yn ∈ {-1, 1} - bipolar outputs
• wji ∈ Z - connection from the i-th to the j-th neuron
• wjj = 0 (j = 1, …, n)

8. Adaptation According to the Hebb Rule
• Hebb rule - synaptic strengths in the brain change in response to experience: changes are proportional to the correlation between the firing of the pre- and post-synaptic neurons
• Technically:
  • training set: T = {xk | xk = (xk1, …, xkn) ∈ {-1,1}n, k = 1, …, p}
  1. Start with wji = 0 (j = 1, …, n; i = 1, …, n)
  2. For the given training set set wji = Σk xkj xki (sum over k = 1, …, p), for 1 ≤ j ≠ i ≤ n
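A minimal NumPy sketch of this storage rule, assuming bipolar ±1 patterns stacked row-wise; the function name and array shapes are illustrative.

```python
import numpy as np

def hebb_train(patterns):
    """Hebbian storage: patterns is a (p, n) array with entries in {-1, +1}."""
    W = patterns.T @ patterns        # w_ji = sum over k of x_kj * x_ki
    np.fill_diagonal(W, 0)           # no self-connections: w_jj = 0
    return W                         # symmetric: w_ji = w_ij
```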

9. Remarks on the Hebb Rule
• Training examples are represented in the net through the relations between the neurons' states
• Symmetric network: wji = wij
• Adaptation can be viewed as the examples voting on each weight: xkj = xki (YES) vs. xkj ≠ xki (NO)
  • the sign of the weight reflects which vote prevails
  • the absolute value of the weight reflects how strong that majority is

10. Active Mode of the Hopfield Network
1. Set yi = xi (i = 1, …, n)
2. Go through all neurons and at each time step select one neuron j to be updated according to the following rule:
  • compute its internal potential: ξj = Σi wji yi
  • set its new state: yj = 1 if ξj > 0, yj = -1 if ξj < 0 (and, by a common convention, unchanged if ξj = 0)
3. If the configuration is not yet stable, go to step 2; otherwise end - the output of the net is given by the final states of the neurons.
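A NumPy sketch of this recall procedure, with asynchronous updates in random order; the tie convention at zero potential and the function name are assumptions.

```python
import numpy as np

def hopfield_recall(W, x, max_sweeps=100, seed=0):
    """Run the net from initial state x (entries in {-1, +1}) until it is stable."""
    rng = np.random.default_rng(seed)
    y = x.copy()
    for _ in range(max_sweeps):
        changed = False
        for j in rng.permutation(len(y)):     # asynchronous, one neuron at a time
            xi = W[j] @ y                     # internal potential of neuron j
            new = 1 if xi > 0 else (-1 if xi < 0 else y[j])   # keep state on a tie
            if new != y[j]:
                y[j], changed = new, True
        if not changed:                       # stable configuration -> output
            break
    return y
```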

11. Energy Function and Energy Landscape
• Energy function: E(y) = -1/2 Σj Σi wji yj yi
• Energy landscape:
  • high energy - unstable states
  • low energy - more stable states
  • the energy always decreases (or remains constant) as the system evolves according to its dynamical rule
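The same energy as a one-line NumPy helper (assuming the Hebbian weight matrix W above); each asynchronous update can only lower this value or leave it unchanged.

```python
import numpy as np

def energy(W, y):
    """Hopfield energy E(y) = -1/2 * sum over j,i of w_ji * y_j * y_i."""
    return -0.5 * y @ W @ y
```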

12. Energy Landscape
• Local minima of the energy function represent the stored examples - attractors
• Basins of attraction - catchment areas around each minimum
• False local minima that do not correspond to any stored example - phantoms

13. Storage Capacity of the Hopfield Network
• Assume random patterns, each bit +1 or -1 with equal probability.
• Perror - the probability that any chosen bit is unstable
  • depends on the number of units n and the number of patterns p
• Capacity of the network - the maximum number of patterns that can be stored without unacceptable errors.
• Results:
  • p ≤ 0.138n - training examples are local minima of E(y)
  • p < 0.05n - training examples are global minima of E(y), with deeper minima than those corresponding to phantoms
• Example: 10 training examples, 200 neurons → 40 000 weights
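The arithmetic behind the example, spelled out as an added illustration: with n = 200 the 0.05n bound works out to exactly 10 patterns, which is presumably why the slide pairs 10 examples with 200 neurons.

```latex
\[
  n = 200 \;\Rightarrow\; n^2 = 40\,000 \text{ weights } (n(n-1)=39\,800
  \text{ excluding the zero diagonal}),\qquad
  0.138\,n \approx 27.6,\qquad 0.05\,n = 10 .
\]
```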

14. Hopfield Network: Example
• Pattern recognition:
  • 8 examples, a 12×10 pixel matrix → 120 neurons
  • input pattern with 25% of the bits wrong
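To make the example concrete, here is a self-contained toy run in NumPy. Randomly generated ±1 patterns stand in for the slide's pixel images; the dimensions match the slide, while the data and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 120, 8                                  # 12x10 "pixel" patterns, 8 of them
patterns = rng.choice([-1, 1], size=(p, n))

W = patterns.T @ patterns                      # Hebbian storage
np.fill_diagonal(W, 0)

probe = patterns[0].copy()
flip = rng.choice(n, size=n // 4, replace=False)
probe[flip] *= -1                              # corrupt 25% of the bits

y = probe.copy()
for _ in range(100):                           # asynchronous recall until stable
    changed = False
    for j in rng.permutation(n):
        xi = W[j] @ y
        new = 1 if xi > 0 else (-1 if xi < 0 else y[j])
        if new != y[j]:
            y[j], changed = new, True
    if not changed:
        break

print("bits recovered correctly:", int(np.sum(y == patterns[0])), "/", n)
```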

15. Self-Organisation
• Unsupervised learning
  • the network must discover for itself patterns, features, regularities, or categories in the input data and code them in its output
  • units and connections must display some degree of self-organisation
• Competitive learning
  • output units compete to be excited
  • only one output unit is on at a time (winner-takes-all mechanism)
• Feature mapping
  • development of significant spatial organisation in the output layer
• Applications:
  • function approximation, image processing, statistical analysis
  • combinatorial optimisation

16. Self-Organising Network
• The goal is to approximate the probability distribution of real-valued input vectors with a finite set of units
• Given a training set T of examples x ∈ Rn and a number of representatives h
• Network topology:
  • the weights belonging to one output unit determine its position in the input space
  • lateral inhibitions

17. Self-Organising Network and Kohonen Learning
• Principle: go through the training set and for each example select the winning output neuron j and modify its weights as follows (see the sketch below)
    wji = wji + η(xi - wji)
  where the real parameter 0 < η < 1 determines the scale of the changes
  • the winning neuron is shifted towards the current input in order to improve its relative position
• Related to k-means clustering
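A NumPy sketch of this update; the function name, eta, and the Euclidean winner selection are assumptions consistent with the k-means analogy.

```python
import numpy as np

def kohonen_step(W, x, eta=0.1):
    """W: (h, n) float array of representatives; x: one training example of length n."""
    c = int(np.argmin(np.linalg.norm(W - x, axis=1)))   # winner = nearest representative
    W[c] += eta * (x - W[c])                            # shift the winner towards x
    return c
```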

18. Kohonen Self-Organising Maps
• Topology - as in the previous case
  • no lateral connections
  • output units are arranged in a structure defining a neighbourhood
  • one- or two-dimensional array of units

19. Kohonen Self-Organising Maps
• Neighbourhood of the output neuron c: Ns(c) = {j; d(j,c) ≤ s}, the set of neurons whose distance from c is at most s.
• Learning algorithm:
  • the weight-update rule involves the neighbourhood relations
  • the weights of the winner as well as of the units close to it are changed according to (see the sketch below)
      wji = wji + hc(j)(xi - wji),   j ∈ Ns(c)
    where hc(j) is either a constant learning rate on the neighbourhood or a Gaussian function of the distance d(j,c)
  • closer units are more affected than those further away
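A sketch of one map update for a one-dimensional array of units, using the Gaussian variant of hc(j); eta and sigma are assumed parameters, and the Gaussian weighting (which decays with grid distance) plays the role of the neighbourhood Ns(c).

```python
import numpy as np

def som_step(W, x, eta=0.1, sigma=2.0):
    """W: (h, n) float array of weights of h units arranged on a line; x: one input vector."""
    c = int(np.argmin(np.linalg.norm(W - x, axis=1)))             # winner unit c
    d = np.abs(np.arange(len(W)) - c)                             # grid distance d(j, c)
    h_c = eta * np.exp(-d.astype(float) ** 2 / (2 * sigma ** 2))  # Gaussian neighbourhood
    W += h_c[:, None] * (x - W)                                   # closer units move more
    return c
```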

20. Kohonen Maps: Examples
