
Soft Computing

Soft Computing. Lecture 15: Constructive learning algorithms. The Hamming network. Definition of constructive learning algorithms (growing neural networks).



Presentation Transcript


  1. Soft Computing. Lecture 15: Constructive learning algorithms. The Hamming network

  2. Definition of constructive learning algorithms (growing neural networks) • Constructive learning algorithms rely on changing the structure of the neural network during learning, i.e. the number of neurons (nodes) and the connections between them • Neural networks that increase in size are called growing neural networks • Among the classical models, ART and the Hamming network may be classified as growing neural networks

  3. Hamming network • Binary inputs and outputs • During learning, the training set is a set of binary vectors that must be recognized • When a new input vector is presented, a new output neuron is created with a weight vector equal to the input vector • During operation, the network computes the Hamming distance between the input vector and the weight vectors of the output neurons (see the sketch below)
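
A minimal Python sketch of this growth behaviour, assuming a plain list of stored weight vectors (the class and method names are illustrative, not from the lecture):

```python
import numpy as np

class HammingNetwork:
    """Growth-style Hamming network: one output neuron per stored pattern."""

    def __init__(self):
        self.weights = []                       # one weight vector per output neuron

    def learn(self, pattern):
        # Presenting a new binary vector creates a new output neuron
        # whose weight vector equals the input vector.
        self.weights.append(np.asarray(pattern))

    def recall(self, pattern):
        # Return the index of the stored pattern with the smallest
        # Hamming distance (number of differing bits) to the input.
        x = np.asarray(pattern)
        distances = [np.sum(w != x) for w in self.weights]
        return int(np.argmin(distances))

net = HammingNetwork()
net.learn([0, 1, 1, 0])
net.learn([1, 1, 0, 0])
print(net.recall([0, 1, 1, 1]))                 # -> 0 (one differing bit)
```

Each call to learn() grows the output layer by one neuron; recall() is a nearest-pattern lookup under the Hamming distance.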

  4. Growing Cell Structure (GCS) (Fritzke, 1994) • Based on SOM • A new node is inserted every λ iterations, where λ is a constant; the new node is positioned to support the node that has accumulated the highest error during the previous steps (see the sketch below) • The model was later extended: • deletion of nodes was introduced • insertion of several new nodes during the initial steps of learning was introduced
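
A rough sketch of the insertion rule alone, assuming a 2-D input space; the full GCS also maintains a triangulated topology of cells, which is omitted here, and places the new node between the highest-error node and one of its neighbours rather than at a random offset:

```python
import numpy as np

rng = np.random.default_rng(0)
LAMBDA = 100                                    # insert a node every LAMBDA steps
nodes = rng.random((3, 2))                      # initial weight vectors in 2-D
errors = np.zeros(len(nodes))                   # accumulated error per node

def step(x, nodes, errors, eps=0.05):
    # Find the best-matching node, accumulate its error, adapt it toward x.
    winner = np.argmin(np.linalg.norm(nodes - x, axis=1))
    errors[winner] += np.linalg.norm(nodes[winner] - x) ** 2
    nodes[winner] += eps * (x - nodes[winner])

for t in range(1, 1001):
    step(rng.random(2), nodes, errors)
    if t % LAMBDA == 0:                         # every LAMBDA iterations...
        q = np.argmax(errors)                   # ...find the highest-error node
        new = nodes[q] + 0.01 * rng.standard_normal(2)   # support it nearby
        nodes = np.vstack([nodes, new])
        errors = np.append(errors, 0.0)
        errors[q] *= 0.5                        # part of q's load is now shared

print(len(nodes))                               # grew from 3 to 13 nodes
```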

  5. Growing Neural Gas (GNG) (Fritzke, 1995) • Also based on SOM • For each data sample presented to the network, the two best-matching nodes are selected, i.e. the two nodes whose weights are closest to the input in the Euclidean sense • A neighbourhood connection is made between the two nodes if it does not already exist, and the positions of these nodes, together with the neighbours of the winning node, are moved so that their weights better match the input • Edges that are not used increase in age, while edges that are used have their age reset to zero. Once the age of an edge exceeds a threshold, that edge is deleted • After λ iterations, the node that has accumulated the highest error during the previous steps is found, and a new node is added to support it. The new node is positioned between the node with the highest error and whichever of its neighbours has the next highest error (see the sketch below)
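
A condensed Python sketch of one GNG step under these rules; parameter values are illustrative, and two details of the published algorithm (global error decay and removal of isolated nodes) are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
nodes = [rng.random(2), rng.random(2)]          # weight vectors
errors = [0.0, 0.0]                             # accumulated error per node
edges = {}                                      # {(i, j): age}, with i < j
EPS_B, EPS_N = 0.05, 0.006                      # winner / neighbour learning rates
A_MAX, LAMBDA, ALPHA = 50, 100, 0.5

def key(i, j):
    i, j = int(i), int(j)
    return (min(i, j), max(i, j))

def gng_step(x, t):
    # 1. Select the two best-matching nodes (Euclidean distance).
    d = [np.linalg.norm(w - x) for w in nodes]
    s1, s2 = np.argsort(d)[:2]
    errors[s1] += d[s1] ** 2
    # 2. Age edges incident to the winner; move winner and its neighbours.
    nodes[s1] += EPS_B * (x - nodes[s1])
    for (i, j) in list(edges):
        if s1 in (i, j):
            edges[(i, j)] += 1
            n = j if i == s1 else i
            nodes[n] += EPS_N * (x - nodes[n])
    edges[key(s1, s2)] = 0                      # used edge: (re)create, age 0
    # 3. Delete edges that are too old.
    for e in [e for e, age in edges.items() if age > A_MAX]:
        del edges[e]
    # 4. Every LAMBDA steps, insert a node between the highest-error node
    #    and its highest-error neighbour.
    if t % LAMBDA == 0:
        q = int(np.argmax(errors))
        nbrs = [j if i == q else i for (i, j) in edges if q in (i, j)]
        if nbrs:
            f = int(max(nbrs, key=lambda n: errors[n]))
            nodes.append(0.5 * (nodes[q] + nodes[f]))
            errors[q] *= ALPHA
            errors[f] *= ALPHA
            errors.append(errors[q])
            r = len(nodes) - 1
            edges.pop(key(q, f), None)          # replace edge q-f by q-r, f-r
            edges[key(q, r)] = 0
            edges[key(f, r)] = 0

for t in range(1, 1001):
    gng_step(rng.random(2), t)
print(len(nodes), "nodes,", len(edges), "edges")
```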

  6. Reduced Coulomb Energy (RCE) network (1982) • Uses prototype vectors to describe particular classes • If none of the current prototype vectors is sufficiently close to the current input, a new class is generated and the input is used as the prototype for that cluster (see the sketch below) • There are no neighbourhood connections between clusters, nor can prototypes move once they have been placed • This model is similar to ART but simpler, and it predates ART
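
A minimal sketch of the growth rule, assuming a fixed distance threshold (the full RCE model also adjusts prototype radii, which is omitted here; all names and values are illustrative):

```python
import numpy as np

THRESHOLD = 0.3                                 # assumed radius of a prototype
prototypes, labels = [], []

def rce_learn(x, label):
    x = np.asarray(x, dtype=float)
    for p, l in zip(prototypes, labels):
        # An existing prototype of the same class already covers this input.
        if l == label and np.linalg.norm(p - x) < THRESHOLD:
            return
    prototypes.append(x)                        # otherwise grow: x becomes
    labels.append(label)                        # the prototype of a new cluster

rce_learn([0.10, 0.20], "A")
rce_learn([0.15, 0.25], "A")                    # covered, nothing is added
rce_learn([0.90, 0.80], "B")                    # new region, new prototype
print(len(prototypes))                          # -> 2
```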

  7. Contextual Layered Associative Memory (CLAM) (1990) • Uses a multi-layered network that has feedback between the layers and resonance within a layer • Patterns are classified over a group of nodes rather than using a winner-takes-all approach • Nodes are added when the probability measure indicates that the region of input space that the current input comes from has low density

  8. The Grow When Required network (GWR) (2002) • The network has two important components: • the nodes, with their associated weight vectors, • the edges that link the nodes to form neighbourhoods of nodes that represent similar perceptions • Both the nodes and edges can be created and destroyed during the learning process

  9. The Grow When Required network (2) • For each input, an edge connection is generated between the node that best matches the input and the second-best matching node. These edge connections have an associated ‘age’ • The age is originally set to zero and is incremented at each time step for each edge that is connected to the winning node. The only exception is the edge that links the best-matching and second-best nodes, whose age is reset to zero • Edges whose age exceeds some constant a_max are removed. Any node that has no neighbours, i.e. that has no edge connections, is removed, as it is a dead node • A new node is added when the activity of the best-matching node (which is a function of the distance between the weights of the node and the input) is not sufficiently high (see the sketch below)
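
A sketch of the insertion test alone, assuming an exponential activity function and an illustrative threshold; the habituation (firing) counters of the full GWR algorithm are omitted:

```python
import numpy as np

ACTIVITY_THRESHOLD = 0.8                        # assumed value

def gwr_should_insert(x, nodes):
    # Activity of the best-matching node falls off with the distance
    # between its weights and the input.
    d = [np.linalg.norm(w - x) for w in nodes]
    best = int(np.argmin(d))
    activity = np.exp(-d[best])
    return activity < ACTIVITY_THRESHOLD, best

nodes = [np.array([0.0, 0.0]), np.array([1.0, 1.0])]
x = np.array([0.5, 3.0])
insert, best = gwr_should_insert(x, nodes)
if insert:
    # GWR places the new node halfway between the input and the
    # best-matching node.
    nodes.append(0.5 * (nodes[best] + x))
print(len(nodes))                               # -> 3: the network grew
```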

  10. Constructive algorithms in multi-layer neural networks • The idea: • Do not guess or calculate how many neurons are needed in the hidden layer; instead, start learning with some small number of neurons and add new ones as required • When training an MLP with an excess number of neurons in the hidden layer, some neurons may turn out not to participate in recognition at all, so they can be deleted together with their connections to other neurons (see the sketch below)
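
An illustrative pruning sketch: hidden units whose outgoing weights are all near zero contribute almost nothing to the output, so they can be removed together with their connections (the tolerance and layer shapes below are assumed values, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(2)
W_hidden = rng.standard_normal((4, 10))         # input layer -> 10 hidden units
W_out = rng.standard_normal((10, 2))            # hidden units -> 2 outputs
W_out[3] = W_out[7] = 1e-6                      # two units that do "nothing"

# A hidden unit whose outgoing weights are all below the tolerance cannot
# influence the outputs, so it does not participate in recognition.
active = np.max(np.abs(W_out), axis=1) > 1e-3
W_hidden = W_hidden[:, active]                  # drop the dead units' inputs
W_out = W_out[active]                           # ...and their outputs
print(W_hidden.shape, W_out.shape)              # (4, 8) (8, 2)
```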

  11. The Cascade-Correlation Learning Architecture (CCLA) (Fahlman & Lebiere, 1990) • Starts with a minimal network and adds units to the network architecture • Nodes are added to the hidden layers of the network • The new units are intended to act as feature detectors and are added when no error reduction has occurred over several training iterations • The input weights of a candidate unit are trained to maximize the correlation between the output of the unit and the residual output error before the unit is added to the network (see the sketch below)
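
A small sketch of the quantity such a candidate unit maximizes: the sum, over output units, of the magnitude of the covariance between the candidate's output V over the training patterns and the residual error E (this follows Fahlman & Lebiere's correlation measure up to normalization; the sample values are invented):

```python
import numpy as np

def cascade_correlation_score(V, E):
    """V: (n_patterns,) candidate unit outputs;
    E: (n_patterns, n_outputs) residual errors of the current network."""
    Vc = V - V.mean()                           # centre over the patterns
    Ec = E - E.mean(axis=0)
    return np.sum(np.abs(Vc @ Ec))              # sum over outputs of |covariance|

V = np.array([0.2, 0.9, 0.1, 0.8])
E = np.array([[0.5, -0.1],
              [-0.4, 0.2],
              [0.6, -0.2],
              [-0.5, 0.3]])
print(cascade_correlation_score(V, E))          # higher is a better candidate
```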

  12. Growing RBF-network
