
Artificial Neural Network (ANN)


Presentation Transcript


  1. Neural Networks

  2. Artificial Neural Network (ANN) • Neural network -- “a machine that is designed to model the way in which the brain performs a particular task or function of interest” (Haykin, 1994, pg. 2). • Uses massive interconnection of simple computing cells (neurons or processing units). • Acquires knowledge thru learning. • Modifies synaptic weights of network in orderly fashion to attain desired design objective. • Attempts to use ANNs since 1950s. • Abandoned by most by 1970s.

  3. Artificial Intelligence (AI) • “A field of study that encompasses computational techniques for performing tasks that apparently require intelligence when performed by humans” (Tanimoto, 1990). • Goal to increase our understanding of reasoning, learning, & perceptual processes. • Fundamental Issues: • Knowledge representation. • Search. • Perception & inference.

  4. Traditional AI vs. Neural Networks • Traditional AI: Programs brittle & overly sensitive to noise. Programs either right or fail completely. Human intelligence much more flexible (guessing). http://www-ai.ijs.si/eliza/eliza.html • Neural Networks: Capture knowledge in large # of fine-grained units. More potential for partially matching noisy & incomplete data. Knowledge is distributed uniformly across network. Model for parallelism – each neuron is independent unit. Similar to human brains?

  5. Artificial Neural Networks – the field also goes by: Biologically Inspired Computing, Parallel Distributed Processing, Machine Learning Algorithms, Connectionism, Natural Intelligent Systems, Neuro-computing.

  6. Handwriting Neural Network • http://www.youtube.com/watch?v=qXoVGxjUTtA

  7. Eliza chatbot: http://www.manifestation.com/neurotoys/eliza.php3

  8. NETtalk (Sejnowski & Rosenberg) • http://cnl.salk.edu/Media/nettalk.mp3

  9. Human Brain • “… a highly complex, nonlinear, and parallel computer (information-processing system). It has the capability to organize its structural constituents, known as neurons, so as to perform certain computations (e.g., pattern recognition, perception, and motor control) many times faster than the fastest digital computer in existence today.” (Haykin, 1999, Neural Networks: A Comprehensive Foundation, pg. 1).

  10. Approaches to Studying Brain • Know enough neuroscience to understand why computer models make certain approximations. • Understand when approximations are good & when bad. • Know tools of formal analysis for models. • Some simple mathematics. • Access to simulator or ability to program. • Know enough cognitive science to have some idea about what the system is supposed to do.

  11. Why Build Models? “… a model is simply a detailed theory.” • Explicitness – constructing model of theory & implementing it as computer program requires great level of detail. • Prediction – difficult to predict consequences of model due to interactions between different parts of model. • Connectionist models are non-linear. • Discover & test new experiments & novel situations. • Practical reasons why difficult to test theory in real world. • Systematically vary parameters thru full range of possible values. • Help understand why a behavior might occur. • Simulations open for direct inspection → explanation of behavior.

  12. Simulations As Experiments • Easy to do simulations, but difficult to do them well. • Running a good simulation is like running a good experiment. • Clearly articulated problem (goal). • Well-defined hypothesis, design for testing hypothesis, & plan for how to analyze the results. • Hypothesis from current issues in literature. • E.g., test predictions, replicate observed behaviors, test theory of behavior. • Task, stimulus representations & network architectures must be defined.

  13. What kinds of problems can ANNs help us understand? • Brain of newborn child contains billions of neurons • But child can’t perform many cognitive functions. • After a few years of receiving continuous streams of signals from outside world via sensory systems, • Child can see, understand language & control movements of body. • Brain discovers, without being taught, how to make sense of signals from world. • How??? • Where do you start?

  14. NN Applications http://www-cs-faculty.stanford.edu/~eroberts/courses/soco/projects/2000-01/neural-networks/Applications/index.html • Character recognition • Image compression • Stock market prediction • Traveling salesman problem • Medicine, electronic nose, loan applications

  15. Neural Networks (ACM) • Web spam detection by probability mapping graphSOMs and graph neural networks • No-reference quality assessment of JPEG images by using CBP neural networks • An Embedded Fingerprints Classification System based on Weightless Neural Networks • Forecasting Portugal global load with artificial neural networks • 2006 Special issue: Neural network forecasts of the tropical Pacific sea surface temperatures • Developmental learning of complex syntactical song in the Bengalese finch: A neural network model • Neural networks in astronomy

  16. Artificial & Biological Neural Networks • Build intelligent programs using models that parallel structure of neurons in human brain. • Neurons – cell body with dendrites & axon. • Dendrites receive signals from other neurons. • When combined impulses exceed threshold, neuron fires & impulse passes down axon. • Branches at end of axon form synapses with dendrites of other neurons. • Excitatory or inhibitory.

  17. Do Neural Networks Mimic Human Brain? • “It is not absolutely necessary to believe that neural network models have anything to do with the nervous system, … • … but it helps. • Because, if they do, we are able to use a large body of ideas, experiments, and facts from cognitive science and neuroscience to design, construct, and test networks.” (Anderson, 1997, p. 1)

  18. Neural Networks Abstract From the Details of Real Neurons • Conductivity delays are neglected. • Net input is calculated as weighted sum of input signals. • Net input is transformed into an output signal via a simple function (e.g., a threshold function). • Output signal is either discrete (e.g., 0 or 1) or it is a real-valued number (e.g., between 0 and 1).
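
A minimal sketch in Python of the abstraction just described: delays are ignored, the net input is a weighted sum, and a simple threshold function turns it into a discrete output. The weights, inputs, and threshold below are illustrative values, not taken from the slides.

```python
def threshold_neuron(inputs, weights, theta=0.5):
    """Fire (return 1) if the weighted sum of inputs reaches threshold theta."""
    net = sum(w * x for w, x in zip(weights, inputs))  # weighted sum of input signals
    return 1 if net >= theta else 0

print(threshold_neuron([1, 0, 1], [0.4, 0.9, 0.2]))             # net = 0.6 -> 1 (fires)
print(threshold_neuron([0, 1, 0], [0.4, 0.9, 0.2], theta=1.0))  # net = 0.9 -> 0 (below threshold)
```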

  19. [Figure slide – no transcript text]

  20. ANN Features • A series of simple computational elements, called neurons (or nodes, units, cells) • Connections between neurons that carry signals • Each link (connection) between neurons has a weight that can be modified • Each neuron sums the weighted input signals and applies an activation function to determine the output signal (Fausett, 1994).

  21. Neural Networks Are Composed of Nodes & Connections • Nodes – simple processing units. • Similar to neurons – receive inputs from other sources. • Excitatory inputs tend to increase neuron’s rate of firing. • Inhibitory inputs tend to decrease neuron’s rate of firing. • Firing rate changes via real-valued number (activation). • Input to node comes from other nodes or from some external source. [Figures: fully recurrent network; 3-layer feedforward network]

  22. Connections • Input travels along connection lines. • Connections between different nodes can have different potency (connection strength) in many models. • Strength represented by real-valued number (connection weight). • Input from one node to another is multiplied by connection weight. • If connection weight is • Negative number – input is inhibitory. • Positive number – input is excitatory.

  23. Nodes & Connections Form Various Layers of NN

  24. A Single Node/Neuron [Figure: inputs from other nodes → Σ → f(net) → outputs to other nodes] • Inputs to node usually summed (Σ). • Net input passed thru activation function (f(net)). • Produces node’s activation, which is sent to other nodes. • Each input line (connection) represents flow of activity from some other neuron or some external source.

  25. More Complex Model of a Neuron [Figure: input signals x1, x2, …, xp are scaled by the synaptic weights wk1, wk2, …, wkp of neuron k; a summing function Σ produces the linear combiner output uk; the threshold θk is subtracted; the activation function produces output yk]

  26. Add up Net Inputs to Node • Each input (from different nodes) is calculated by multiplying activation value of input node by weight on connection (from input node to receiving node). • Net input to node i: neti = Σj wij aj • Σ = sigma (summation) • i = receiving node • aj = activation on nodes sending to node i • wij = weight on connection between nodes j & i.

  27. Node 4 Sums (weight * activation) For All Input Nodes • net4 = Σj w4j aj • i = 4 (node 4). • j ranges over the 3 input nodes (0, 1, 2) into node 4. • Add up w4j * aj for all 3 input nodes. [Figure: input nodes 0, 1, 2 feeding node 4]
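
A short sketch of the net-input calculation neti = Σj wij aj for the three-input example above; the activation values and weights below are hypothetical numbers chosen for illustration.

```python
# Hypothetical activations a_j of the three sending nodes (0, 1, 2)
activations = {0: 1.0, 1: 0.5, 2: 0.25}
# Hypothetical weights w_4j on the connections into receiving node 4
weights_to_4 = {0: 0.2, 1: -0.6, 2: 0.8}

# net_4 = sum over j of w_4j * a_j
net_4 = sum(weights_to_4[j] * activations[j] for j in activations)
print(round(net_4, 2))  # 0.2*1.0 + (-0.6)*0.5 + 0.8*0.25 = 0.1
```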

  28. Activation Functions: Node Can Do Several Things With Net Input • Activation (e.g., output) = input: f(net) is identity function. • Simplest case. • Threshold must be achieved before activation occurs. • Activation function may be non-linear function of input, e.g., sigmoid – resembles real neurons. • Activation function may be linear.

  29. Different Types of NN Possible • Single layer or multi-layer architectures (Hopfield, Kohonen). • Data processing thru network. • Feedforward. • Recurrent. • Variations in nodes. • Number of nodes. • Types of connections among nodes in network. • Learning algorithms. • Supervised. • Unsupervised (self-organizing). • Back propagation learning (training). • Implementation. • Software or hardware.

  30. [Figure slide – no transcript text]

  31. Steps in Designing a Neural Network • Arrange neurons in various layers. • Decide type of connections among neurons for different layers, as well as among neurons within layer. • Decide way a neuron receives input & produces output. • Determine strength of connection within network by allowing network to learn appropriate values of connection weights via training data set.

  32. Activation Functions • Identity function: f(x) = x for all x • Binary step function: f(x) = 1 if x >= θ; f(x) = 0 if x < θ • Continuous log-sigmoid function (Logistic function): f(x) = 1/[1 + exp(-σx)]
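
The three activation functions on this slide, written out directly in Python (theta and sigma follow the slide’s notation; the sample inputs are illustrative):

```python
import math

def identity(x):
    return x  # f(x) = x for all x

def binary_step(x, theta=0.0):
    return 1 if x >= theta else 0  # f(x) = 1 if x >= theta, else 0

def log_sigmoid(x, sigma=1.0):
    return 1.0 / (1.0 + math.exp(-sigma * x))  # f(x) = 1/[1 + exp(-sigma*x)]

print(log_sigmoid(0.0))   # 0.5 -- exact mid-range at zero input
print(log_sigmoid(4.0))   # ~0.982 -- effectively "on"
print(log_sigmoid(-4.0))  # ~0.018 -- effectively "off"
```

The printouts at ±4.0 anticipate slide 34: outside roughly that range the node behaves in an all-or-nothing fashion.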

  33. Sigmoid Activation Function • ai = 1/(1 + e^(-neti)) • ai = activation (output) of node i • neti = net activation flowing into node i • e = exponential • Gives the output of node for any given net input. • Graph of relationship (next slide).

  34. Sigmoid Activation Function Often Used for Nodes in NN • For wide range of inputs (> 4.0 or < -4.0), nodes exhibit all-or-nothing response. • Output max. value of 1 (on). • Output min. value of 0 (off). • Within range of -4.0 to 4.0, nodes show greater sensitivity. • Output capable of making fine discriminations between different inputs. • Non-linear response is at heart of what makes these networks interesting.

  35. What will be the activation of node 2, assuming the input you just calculated? • If node 2 receives input of 1.25, activation is 0.777. • Activation function scales from 0.0 to 1.0. • When net input = 0.0, output is exact mid-range of possible activation (0.5). • Negative inputs give activations below 0.5.

  36. Example 2-Layered Feedforward Network: Step Thru Process [Figure: 2-layered feedforward network – input nodes a0, a1 connected to output node a2 by weights w20, w21] • Neural network consists of collection of nodes. • Number & arrangement of nodes defines network architecture. • Example 2-layered feedforward: • 2 layers (input, output). • No intra-level connections. • No recurrent connections. • Single connection into input nodes & out of output nodes. • Very simplified in comparison to biological neural network!

  37. [Figure: same network – a0, a1 feeding a2 via w20, w21] • Each input node has certain level of activity associated with it. • 2 input nodes (a0, a1). • 2 output nodes (a2, a3). • Look at one output unit (a2). • Receives input from a0 & a1 via independent connections. • Amount depends on activation values of input nodes (a0 & a1) and weights (w20, w21). • For this network, activity flows in 1 direction along connections. • e.g., w20 ≠ w02; w02 doesn’t exist. • Total input to node 2 (a2) = w20a0 + w21a1 (wij = w20 when i = 2 & j = 0).

  38. Exercise 1.1 • What is the input received by node 2? [Figure: inputs of 1 and 1 feed a2 through weights 0.75 and 0.5] • Net input for node 2 = (1.0 * 0.75) + (1.0 * 0.5) = 1.25. • Net input alone doesn’t determine activity of output node. • Must know activation function of node. • Assume nodes have activation functions shown in EQ 1.2 (& Fig. 1.3). • Next slide shows sample inputs & activations produced, assuming logistic activation function.
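
Exercise 1.1 can be checked in a few lines of Python; the weights (0.75, 0.5) and input activations (1.0, 1.0) come from the slide, and the logistic activation is the function given on slide 32:

```python
import math

inputs = [1.0, 1.0]    # activations of the two input nodes
weights = [0.75, 0.5]  # w20 and w21 from the figure

# Net input to node 2: (1.0 * 0.75) + (1.0 * 0.5)
net_2 = sum(w * a for w, a in zip(weights, inputs))

# Logistic activation of node 2
a_2 = 1.0 / (1.0 + math.exp(-net_2))

print(net_2)          # 1.25
print(round(a_2, 3))  # 0.777, the activation quoted on slide 35
```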

  39. Bias Node (Default Activation) • In absence of any input (i.e., input = 0.0), nodes have output of 0.5. • Useful to allow nodes to have default activation. • Node is “off” (output 0.0) in absence of input. • Or can have default state where node is “on”. • Accomplish this by adding node to network which receives no inputs, but is always fully activated & outputs 1.0 (bias node). • Node can be connected to any node in network. • Often connected to all nodes except input nodes. • Allow weights on connections from this node to receiving nodes to be different.

  40. Guarantees that all receiving nodes have some input even if all other nodes are off. • Since output of bias node is always 1.0, input it sends to any other node is 1.0 * wij (value of weight itself). • Only need one bias node per network. • Similar to giving each node a variable threshold. • Large negative bias == node is off (activation close to 0.0) unless it gets sufficient positive input from other sources to compensate. • Large positive bias == receiving node is on & requires negative input from other nodes to turn it off. • Useful to allow individual nodes to have different defaults.
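
A small sketch of the bias-node idea from the last two slides: the bias node always outputs 1.0, so its contribution to a receiving node is simply its weight, which sets the node’s default state. The specific weights here are illustrative, not from the slides.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def node_output(input_activations, input_weights, bias_weight):
    net = sum(w * a for w, a in zip(input_weights, input_activations))
    net += bias_weight * 1.0  # bias node is always fully activated (output 1.0)
    return logistic(net)

# With all other inputs off, the bias alone sets the default activation:
print(round(node_output([0.0, 0.0], [0.5, 0.5], bias_weight=-5.0), 3))  # 0.007 -> node "off"
print(round(node_output([0.0, 0.0], [0.5, 0.5], bias_weight=5.0), 3))   # 0.993 -> node "on"
```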

  41. Learning From Experience • Changing a neural network’s connection weights (training) causes network to learn solution to a problem. • Strength of connection between neurons stored as weight-value for specific connection. • System learns new knowledge by adjusting these connection weights.

  42. Three Training Methods for NN • 1. Unsupervised learning – hidden neurons must find a way to organize themselves without help from outside. • No sample outputs provided to network against which it can measure its predictive performance for given vector of inputs. • Learning by doing.

  43. 2. Supervised Learning (Reinforcement) • Works on reinforcement from outside. • Connections among neurons in hidden layer randomly arranged, then reshuffled as network is told how close it is to solution. • Requires teacher – training set of data or observer who grades performance of network results. • Both unsupervised & supervised suffer from relative slowness & inefficiency, relying on random shuffling to find proper connection weights.

  44. 3. Back Propagation • Network given reinforcement for how it is doing on task, plus information about errors is used to adjust connections between layers. • Proven highly successful in training of multilayered neural nets. • Form of supervised learning.
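
As a hedged illustration of this error-driven adjustment, here is the delta-rule update for a single logistic output node trained on OR (a toy example of my own; full back propagation extends the same error signal back through hidden layers, which this sketch omits):

```python
import math
import random

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training set: the OR function as (inputs, target) pairs
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(0)
weights = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
bias = 0.0
rate = 0.5  # learning rate

for epoch in range(2000):
    for inputs, target in data:
        out = logistic(sum(w * x for w, x in zip(weights, inputs)) + bias)
        # Delta rule: error scaled by the slope of the logistic at this output
        delta = (target - out) * out * (1 - out)
        weights = [w + rate * delta * x for w, x in zip(weights, inputs)]
        bias += rate * delta

for inputs, target in data:
    out = logistic(sum(w * x for w, x in zip(weights, inputs)) + bias)
    print(inputs, "target:", target, "output:", round(out, 2))
```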

  45. Example Learning Algorithms • Hebb’s Rule -- how physical networks might learn. • Perceptron Convergence Procedures (PCP). • Widrow-Hoff Learning Rule (1960s). • Hopfield. • Backpropagation of Error (Generalized Delta Rule). • Kohonen’s Learning Laws (not covered here).

  46. McCulloch-Pitts (1943) Neuron • Activity of neuron is an “all-or-none” process. • Certain fixed number of synapses must be excited within period of latent addition to excite neuron at any time. • Number is independent of previous activity & position of neuron. • Only significant delay within nervous system is synaptic delay. • Activity of any inhibitory synapse absolutely prevents excitation of neuron at that time. • Structure of net does not change with time.

  47. McCulloch-Pitts Neuron • Firing within a neuron is controlled by a fixed threshold (θ). • Binary step function: f(x) = 1 if x >= θ; f(x) = 0 if x < θ. • What happens here if θ = 2?

  48. McCulloch-Pitts Neuron: AND • Threshold = 2. Does a2 fire?

  49. McCulloch-Pitts Neuron: OR • Threshold = 2. Does a2 fire?

  50. McCulloch-Pitts Neuron: XOR • Threshold = 2. Does a2 fire?
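
The three logic-gate slides can be reproduced with a McCulloch-Pitts unit: a binary step at the fixed threshold of 2. The weight choices (1, 1 for AND; 2, 2 for OR) are the usual textbook values and are an assumption here, since the original figures are not in the transcript.

```python
def mp_neuron(inputs, weights, theta=2):
    """McCulloch-Pitts unit: binary step at fixed threshold theta."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= theta else 0

for a0, a1 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((a0, a1),
          "AND:", mp_neuron([a0, a1], [1, 1]),   # fires only when both inputs are 1
          "OR:",  mp_neuron([a0, a1], [2, 2]))   # fires when either input is 1

# XOR: no single threshold unit works -- no weights separate
# {(0,1), (1,0)} from {(0,0), (1,1)}, so the XOR slide's single a2
# node cannot fire correctly; a hidden (intermediate) unit is required.
```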
