
Neural Networks- a brief intro Dr Theodoros Manavis tmanavis@ist.gr








  1. Neural Networks- a brief intro Dr Theodoros Manavis tmanavis@ist.edu.gr

  2. Neural Networks: Introduction • Motivation for artificial neural networks (more commonly referred to as neural networks): the fact that the human brain computes in an entirely different manner from a conventional computer. • Neural networks are an attempt to mimic the human brain’s nonlinear and parallel processing capability for applications such as pattern recognition, classification, data mining, speech recognition, etc. • Similar to the human brain, a neural network acquires knowledge through a learning process. The inter-neuron connection strengths, known as synaptic weights, are used to store the knowledge. • Neural networks model problems for which an explicit mathematical relationship is not known. 

  3. Neural Networks: Models • Pattern recognition • Classification • Data mining • Speech recognition • …

  4. Neural Networks: Introduction • Neural networks learn relationships between input and output or organize large quantities of data into patterns. • Neural networks model biological neurons inside the human brain. But neural networks certainly do NOT think, nor will they ever be a "black-box" solution to every problem. •  Neural networks have been used to solve numerous problems in many disciplines (Computer Science, Mathematics, Physics, Medicine, Marketing, Electronics, etc.). 

  5. Neural Networks: How they work • Neural networks are composed of simple elements operating in parallel. • Inspired by biological nervous systems. As in nature, their output is determined by the connections between elements. • We can train a neural network to perform a particular function by adjusting the values of the connections (weights) between elements. Neural networks are adjusted, or trained, so that a particular input leads to a specific target output.
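As a concrete illustration of adjusting connection weights so that a particular input leads to a target output, here is a minimal single-neuron sketch. The step transfer function, learning rate, and AND-gate training data are illustrative assumptions, not material from the slides:

```python
# A single artificial neuron whose weights are adjusted (perceptron rule)
# until each input produces its target output.

def neuron(weights, bias, inputs):
    """Weighted sum of inputs passed through a step transfer function."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s >= 0 else 0

def train(samples, lr=0.1, epochs=20):
    """Nudge each weight in the direction that reduces the error."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - neuron(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Input/target pairs for the logical AND function.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([neuron(w, b, x) for x, _ in data])  # -> [0, 0, 0, 1]
```

After training, the learned weights reproduce the target output for every training input.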

  6. Neural Networks: Key features… • Not programmed, because they learn by example.  • Can generalize from their training data to other 'new' data.  That is, the ability to interpolate from a previous learning experience.  e.g. broomstick example.   • Fault tolerant.  That is, they can produce a reasonably correct output from noisy and incomplete data, whereas conventional computers usually require correct data. • Fast, because many interconnected processing units work in parallel.  Again modeled on the human brain, which may contain some 100 billion neurons.  Note that most of these are not replaced as you get older!

  7. …Neural Networks: Key features • Relatively cheap to build, but computationally intensive to train. • Particularly good for problems whose solution is complex and difficult to specify, but which provide an abundance of data from which a response can be learnt. • Can be trained to generate non-linear mappings, i.e. work on real-world problems!   e.g. predicting the weather. • NNs cannot magically create information that is not contained in the training data.

  8. Neural Networks: Supervised Learning In the figure above, the network is adjusted, based on a comparison of the output and the target, until the network output matches the target. Many such input/target pairs are used, in this supervised learning, to train a network.

  9. Neural Networks: Supervised Learning

  10. Neural Networks: Classification This Neural Network… …can perform this classification

  11. Neural Networks: Prediction Stock Value Prediction

  12. When should Neural Networks be used ? • The solution to the problem cannot be explicitly described by an algorithm, a set of equations, or a set of rules • There is some evidence that an input-output relationship exists between a set of input variables x and corresponding output data y • There should be a large amount of data available to train the neural network. • In practice, NNs are especially useful for classification and function approximation problems

  13. Problems which can lead to poor performance • The training data do not represent all possible cases that the network will encounter in practice. • The main factors are not present in the available data. E.g. trying to determine the loan application without having knowledge of the applicant's salaries. • The network is required to implement a very complex function.

  14. Neural Networks: Categories The two main kinds of learning algorithms are supervised and unsupervised. • In supervised learning, the correct results (target values, desired outputs) are known and are given to the NN during training so that the NN can adjust its weights to try to match its outputs to the target values. After training, the NN is tested by giving it only input values, not target values, and seeing how close it comes to outputting the correct target values. • In unsupervised learning, the correct results are not known during training. Unsupervised NNs usually perform some kind of data compression, such as dimensionality reduction or clustering.
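A minimal sketch of the unsupervised case: no target values are supplied, and the algorithm groups the data on its own. This uses plain 1-D k-means with two centres (standing in for an unsupervised NN such as a SOM); the data and starting centres are illustrative assumptions:

```python
# 1-D k-means: alternately assign each point to its nearest centre,
# then move each centre to the mean of its assigned points.
def kmeans_1d(points, centers=(0.0, 1.0), steps=10):
    for _ in range(steps):
        groups = ([], [])
        for p in points:
            nearest = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
            groups[nearest].append(p)
        # Keep the old centre if a group ends up empty.
        centers = tuple(sum(g) / len(g) if g else c
                        for g, c in zip(groups, centers))
    return centers

points = [1.0, 1.2, 0.8, 8.9, 9.1, 9.0]   # two obvious clusters
print([round(c, 6) for c in sorted(kmeans_1d(points))])  # -> [1.0, 9.0]
```

Note that the two cluster centres emerge purely from the structure of the data, with no "correct answers" ever shown to the algorithm.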

  15. Neural Networks: Categories Two major kinds of network topology are feedforward and feedback. • In a feedforward NN, the connections between units do not form cycles. Feedforward NNs usually produce a response to an input quickly. Most feedforward NNs can be trained using a wide variety of efficient conventional numerical methods in addition to algorithms invented by NN researchers. • In a feedback or recurrent NN, there are cycles in the connections. In some feedback NNs, each time an input is presented, the NN must iterate for a potentially long time before it produces a response. Feedback NNs are usually more difficult to train than feedforward NNs.
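The feedforward case can be sketched as a pass in which signals flow strictly from inputs to outputs with no cycles. The weights and layer sizes below are arbitrary illustrative numbers, not a trained network:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(weights, biases, inputs):
    """One fully connected layer with a sigmoid transfer function."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def feedforward(inputs):
    # Two inputs -> two hidden units -> one output; no cycles anywhere.
    hidden = layer([[0.5, -0.4], [0.3, 0.8]], [0.1, -0.2], inputs)
    output = layer([[1.2, -0.7]], [0.05], hidden)
    return output

out = feedforward([1.0, 0.0])
print(0.0 < out[0] < 1.0)  # sigmoid outputs always lie in (0, 1)
```

A feedback network would differ in that the output (or hidden state) would be fed back in as part of the next input, requiring iteration before a stable response appears.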

  16. Neural Networks: Categories NNs also differ in the kinds of data they accept. Two major kinds of data are categorical and quantitative. • Categorical variables take only a finite number of possible values, and there are usually several or more cases falling into each category. Categorical variables may have symbolic values (e.g., "male" and "female", or "red", "green" and "blue") that must be encoded into numbers before being given to the network. [CLASSIFICATION] • Quantitative variables are numerical measurements of some attribute, such as length in meters. The measurements must be made in such a way that at least some arithmetic relations among the measurements reflect analogous relations among the attributes of the objects that are measured. [REGRESSION]
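The encoding of symbolic categorical values into numbers can be sketched with one-hot encoding; the colour values come from the slide's own example:

```python
# Map a symbolic categorical value to a binary vector with a single 1,
# so it can be fed to a network as numbers.
def one_hot(value, categories):
    return [1 if value == c else 0 for c in categories]

colours = ["red", "green", "blue"]
print(one_hot("green", colours))  # -> [0, 1, 0]
```

One-hot encoding avoids imposing a spurious numeric ordering (e.g. red=1, green=2, blue=3) on values that have no arithmetic relationship.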

  17. Stages of solving problems using Neural Networks The steps of a neural network solution are: • Stage 1: The collection, preparation and analysis of the training data (Neural Networks very rarely operate on the raw data. An initial pre-processing stage is essential). • Stage 2: The design, training and testing of the neural network

  18. Data Collection and preparation • The quality of the neural network will therefore critically depend on the quality and quantity of the training data. • Neural networks are great for data fusion, i.e. combining different types of data from different sources. For example, the state of a motor could be determined by a network trained on measurements of sound, temperature, vibrations and flow rate of the lubricant. • Prediction and classification applications require target data and supervised learning. • Preprocessing of the raw data is a very important stage in order to identify outliers in the data set. An outlier is an unusually large or small value. • There must be a sufficient number of training examples to ensure that the neural network is trained to recognize and respond to the full possible range of inputs.
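The outlier-identification step above can be sketched as flagging values that lie far from the mean; the two-standard-deviation threshold and the sensor-reading data are illustrative assumptions:

```python
import statistics

# Flag values more than k standard deviations from the mean as outliers.
def outliers(values, k=2.0):
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > k * sd]

readings = [20.1, 19.8, 20.3, 20.0, 19.9, 55.0]  # one suspect reading
print(outliers(readings))  # -> [55.0]
```

In practice a flagged value would be inspected before being corrected or removed, since some "outliers" are genuine rare events the network should learn.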

  19. Design of Neural Networks The aim is to perform a specific mapping between input and output. In order to do this we need to: • Choose the number of layers in the network • Choose the number of neurons in each layer. • Choose the transfer function for each layer. • Train the network to give the required mapping (training means calculate the weights and biases that will do that). • Check that the network generalises as expected.

  20. Generalisation in Neural Networks In training, we calculate the weights of the network by using a number of input and output pairs (this means that for specific inputs we know what the output should be and we provide this information to the network so that we will make it “learn”). The network however, must be able to generalise well after the training (otherwise the training is not correct). Generalisation means that when we present to the network an input that it has not seen before, we must get from the network an output which is the same or very close to the expected output.

  21. Generalisation in Neural Networks In order to achieve good generalisation, the training set (= the pairs of known inputs and outputs) must be well designed. In a well designed training set, the training inputs must: • Span (= cover) the whole range of likely inputs to the network • Be sufficiently dense over the range of possible inputs to allow accurate interpolation (= calculation of intermediate unknown values)
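Generalisation by interpolation can be sketched as follows: fit a model on training pairs that span the input range, then present an input the model has never seen and compare the response with the expected output. A least-squares line stands in here for a trained network; the data are illustrative:

```python
# Least-squares slope/intercept for (input, output) training pairs.
def fit_line(pairs):
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

# Training inputs span 0..4 densely enough to interpolate within them.
training = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (4.0, 9.0)]  # y = 2x + 1
slope, intercept = fit_line(training)
unseen = 3.0                        # an input not in the training set
print(slope * unseen + intercept)   # -> 7.0, matching the expected 2*3 + 1
```

Had the training inputs all been clustered near zero, the same model could extrapolate badly at x = 3, which is exactly the "span and density" point the slide makes.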

  22. Unsupervised Neural Networks (SOM) Classifying Faces • Can extend definition of ‘feature set’ to allow classification of ‘face’ objects by gender, age, race, mood, ...

  23. Unsupervised Neural Networks (SOM) SOM – Visualisation (1) ‘Poverty map’ based on 39 indicators from World Bank statistics (1992)

  24. Unsupervised Neural Networks (SOM) SOM – Visualisation (2)

  25. Unsupervised Neural Networks (SOM)

  26. Unsupervised Neural Networks (SOM)

  27. Unsupervised Neural Networks (SOM)

  28. Unsupervised Neural Networks (SOM)

  29. Tutorials - Labs

  30. Software to use: MATLAB

  31. (Software to use: RapidMiner)

  32. (Software to use: RapidMiner)

  33. Genetic Algorithms - Optimization

  34. The Evolutionary Cycle: Selection → Recombination → Mutation → Replacement

  35. Advantages & Disadvantages of GAs ADVANTAGES (CHARACTERISTICS OF EAs): • EAs can quickly locate areas of high-quality solutions when the domain is very large or complex (GAs can quickly explore huge search spaces and find those regions that are more likely to contain the solution) • Can scan the global space simultaneously instead of restricting themselves to localized regions of gradient shifts • Fault tolerance • Require little knowledge of the problem to be solved DISADVANTAGE: Slow in LOCAL FINE TUNING in comparison to gradient methods  Hybrid training can speed up convergence
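The evolutionary cycle above (selection, recombination, mutation, replacement) can be sketched on a toy maximisation problem. The population size, rates, and the "count the 1s" fitness function are illustrative assumptions:

```python
import random

random.seed(0)
fitness = lambda bits: sum(bits)        # toy fitness: count of 1 bits

def evolve(n_bits=10, pop_size=20, generations=40):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # Selection: the better of two random individuals, twice.
            a, b = random.sample(pop, 2)
            p1 = max(a, b, key=fitness)
            a, b = random.sample(pop, 2)
            p2 = max(a, b, key=fitness)
            # Recombination: one-point crossover of the two parents.
            cut = random.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]
            # Mutation: flip each bit with small probability.
            child = [bit ^ (random.random() < 0.05) for bit in child]
            new_pop.append(child)
        pop = new_pop                   # Replacement of the old generation.
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # rapidly approaches the optimum of 10
```

Note the trade-off the slide describes: the population quickly concentrates in the high-fitness region, but random mutation alone is slow at polishing the last few bits, which is why hybrid schemes hand over to a local (gradient-style) method for fine tuning.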

  36. GAs are good at optimisation

  37. Evolutionary Computation & ANNs Other name: Adaptive learning, because they combine the learning power of neural nets and the adaptive capabilities of evolutionary processes. A completely functioning network (weights and architecture) can be evolved with little human interaction. VERY ATTRACTIVE IDEA BEHIND EVOLUTIONARY NEURAL NETWORKS

  38. Thank You for Your Attention 
