
Presentation Transcript


  1. PYTHON Deep Learning Prof. Muhammad Saeed

  2. Deep Learning Introduction Dept. Of Computer Science & IT FUUAST

  3. Deep Learning Dept. Of Computer Science & IT FUUAST

  4. Deep Learning Biological Neuron 100 billion neurons in human brain (average) 100 – 500 trillion synaptic connections Dept. Of Computer Science & IT FUUAST

  5. Deep Learning • Synapse vs. weight Dept. Of Computer Science & IT FUUAST

  6. Deep Learning McCulloch-Pitts Neuron In 1943, Warren McCulloch and Walter Pitts published the first ‘neural network’. Their "neurons" operated under the following assumptions: 1. They are binary devices (Vi ∈ {0, 1}). 2. Each neuron has a fixed threshold, theta. 3. The neuron receives inputs from excitatory synapses, all having identical weights. 4. Inhibitory inputs have an absolute veto power over any excitatory inputs. 5. At each time step the neurons are simultaneously (synchronously) updated by summing the weighted excitatory inputs and setting the output (Vi) to 1 iff the sum is greater than or equal to the threshold AND the neuron receives no inhibitory input. FUUAST Computer Science
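A minimal Python sketch of such a unit, following the five assumptions above (the helper name mp_neuron and its argument layout are illustrative, not code from the slides):

```python
def mp_neuron(excitatory, inhibitory, theta):
    """A McCulloch-Pitts unit: binary inputs, fixed threshold theta."""
    if any(inhibitory):                          # assumption 4: absolute inhibitory veto
        return 0
    return 1 if sum(excitatory) >= theta else 0  # assumption 5: fire iff sum >= threshold
```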

  7. Deep Learning McCulloch-Pitts Neuron for AND Function McCulloch-Pitts Neuron for OR Function Dept. Of Computer Science & IT FUUAST
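With the hypothetical mp_neuron helper above, AND and OR over two inputs differ only in the threshold (theta = 2 fires only when both inputs are active; theta = 1 fires when at least one is):

```python
for x1 in (0, 1):
    for x2 in (0, 1):
        and_out = mp_neuron([x1, x2], [], theta=2)  # 1 only for input (1, 1)
        or_out = mp_neuron([x1, x2], [], theta=1)   # 1 for any input except (0, 0)
        print((x1, x2), "AND:", and_out, "OR:", or_out)
```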

  8. Deep Learning McCulloch-Pitts Neuron for XOR Function Dept. Of Computer Science & IT FUUAST
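XOR is not linearly separable, so no single McCulloch-Pitts unit computes it; constructions like the slide's combine several units. One standard two-layer arrangement, sketched with the same hypothetical helper:

```python
def mp_xor(x1, x2):
    h1 = mp_neuron([x1], [x2], theta=1)       # fires iff x1=1 and x2=0 (x2 inhibits)
    h2 = mp_neuron([x2], [x1], theta=1)       # fires iff x2=1 and x1=0 (x1 inhibits)
    return mp_neuron([h1, h2], [], theta=1)   # OR of the two hidden units
```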

  9. Deep Learning McCulloch-Pitts Neuron A NOR gate gives you an output of 1 only when all inputs are zero Dept. Of Computer Science & IT FUUAST

  10. Deep Learning McCulloch-Pitts Neuron A NAND gate gives you an output of 0 only when all inputs are 1 Dept. Of Computer Science & IT FUUAST

  11. Deep Learning Perceptron Perceptrons (Rosenblatt 1958; Minsky/Papert 1969) are generalized variants of an earlier, simpler model (McCulloch/Pitts neurons, 1943): • Inputs are weighted • Weights are real numbers (positive and negative) • No special inhibitory inputs A perceptron is a single-layer neural network; a multi-layer perceptron is what is usually called a neural network. Dept. Of Computer Science & IT FUUAST

  12. Deep Learning Rosenblatt Perceptron Dept. Of Computer Science & IT FUUAST

  13. Deep Learning Structure of a node: • Squashing (transfer) functions limit node output. [The slide shows two transfer-function formulas as images.] Dept. Of Computer Science & IT FUUAST

  14. Deep Learning • Squashing (Transfer) Functions [The slide shows two more transfer-function formulas as images.] Dept. Of Computer Science & IT FUUAST
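The transfer-function formulas on these two slides are images and did not survive extraction. The functions most commonly listed in decks like this are the hard threshold, the logistic sigmoid, and tanh, so take the following definitions as an assumption rather than a transcription:

```python
import math

def step(x, theta=0.0):
    return 1 if x >= theta else 0        # hard threshold: output in {0, 1}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))    # logistic: output in (0, 1)

def tanh(x):
    return math.tanh(x)                  # output in (-1, 1)
```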

  15. Deep Learning Feeding data through the net: (1 × 0.25) + (0.5 × (-1.5)) = 0.25 + (-0.75) = -0.5 Squashing (sigmoid): 1 / (1 + e^0.5) = 0.3775 Dept. Of Computer Science & IT FUUAST
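The same forward pass in a few lines of Python; the inputs (1, 0.5) and weights (0.25, -1.5) are the slide's, while the sigmoid squashing is a reconstruction of the missing formula:

```python
import math

inputs = [1.0, 0.5]
weights = [0.25, -1.5]

net = sum(x * w for x, w in zip(inputs, weights))  # (1)(0.25) + (0.5)(-1.5) = -0.5
out = 1.0 / (1.0 + math.exp(-net))                 # sigmoid(-0.5) = 0.3775...
print(net, out)
```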

  16. Deep Learning Problems with Perceptrons Frank Rosenblatt proved mathematically that the perceptron learning rule converges if the two classes can be separated by a linear hyperplane, but problems arise if the classes cannot be separated perfectly by a linear classifier (XOR is the classic example). Dept. Of Computer Science & IT FUUAST
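A compact sketch of the perceptron learning rule the slide refers to (the function name, learning rate lr, and epoch count are illustrative choices):

```python
def train_perceptron(samples, lr=0.1, epochs=20):
    """samples: list of (inputs, target) pairs with binary targets."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            err = t - y                                   # zero when correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Converges for linearly separable data such as AND...
w, b = train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
# ...but would cycle forever on XOR, exactly the failure the slide describes.
```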

  17. Deep Learning Artificial Neural Networks …… Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text or time series, must be translated. Dept. Of Computer Science & IT FUUAST

  18. Deep Learning ……. Artificial Neural Networks • Neural networks help us cluster and classify. You can think of them as a clustering and classification layer on top of the data you store and manage. They help to group unlabeled data according to similarities among the example inputs, and they classify data when they have a labeled dataset to train on. (Neural networks can also extract features that are fed to other algorithms for clustering and classification.) Dept. Of Computer Science & IT FUUAST

  19. Deep Learning ANNs – The Basics ANNs incorporate the two fundamental components of biological neural nets: • Neurons (nodes) • Synapses (weights) [Figure: a three-layer network.] Dept. Of Computer Science & IT FUUAST

  20. Deep Learning One Layer Neural Network Dept. Of Computer Science & IT FUUAST
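Concretely, a one-layer network is a weight matrix, a bias vector, and a squashing function applied to the weighted sums; a minimal NumPy sketch with arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)       # 3 input activations
W = rng.normal(size=(4, 3))  # 4 output neurons, each with 3 weights
b = np.zeros(4)              # one bias per output neuron

y = 1.0 / (1.0 + np.exp(-(W @ x + b)))  # sigmoid(Wx + b)
print(y)                                # 4 output activations
```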

  21. Deep Learning Multiple Layers Of Neurons Dept. Of Computer Science & IT FUUAST

  22. Deep Learning Deep Neural Networks A deep neural network is a neural network with a certain level of complexity, conventionally one with more than two layers. The stacked layers let it process data in increasingly complex ways. FUUAST Computer Science

  23. Deep Learning Multiple Layer Neural Networks Dept. Of Computer Science & IT FUUAST
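Stacking such layers gives the multi-layer networks pictured here: each layer's output becomes the next layer's input. A sketch with two hidden layers (the sizes are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
layers = [
    (rng.normal(size=(5, 3)), np.zeros(5)),  # input (3) -> hidden (5)
    (rng.normal(size=(4, 5)), np.zeros(4)),  # hidden (5) -> hidden (4)
    (rng.normal(size=(2, 4)), np.zeros(2)),  # hidden (4) -> output (2)
]

a = rng.normal(size=3)       # input activations
for W, b in layers:
    a = sigmoid(W @ a + b)   # feed each layer's output forward
print(a)                     # final output activations
```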

  24. Deep Learning ANN Architecture • Multilayer Perceptron (MLP) • Radial Basis Function Networks (RBF) • Self-Organizing Feature Maps (SOFM) • Recurrent (Feedback) Networks Learning: • Supervised Learning • Unsupervised Learning Dept. Of Computer Science & IT FUUAST

  25. Deep Learning Feed-forward nets • Information flow is unidirectional • Data is presented to Input layer • Passed on to Hidden Layer • Passed on to Output layer • Information is distributed • Information processing is parallel Internal representation (interpretation) of data Dept. Of Computer Science & IT FUUAST

  26. Deep Learning • Data is presented to the network in the form of activations in the input layer • Examples • Pixel intensity (for pictures) • Molecule concentrations (for artificial nose) • Share prices (for stock market prediction) • Data usually requires preprocessing • Analogous to senses in biology • How to represent more abstract data, e.g. a name? • Choose a pattern, e.g. • 0-0-1 for “Zeeshan” • 0-1-0 for “AleRaza” Dept. Of Computer Science & IT FUUAST
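Patterns like 0-0-1 are one-hot codes. A small sketch of encoding names this way; the ordering is chosen so the codes match the slide, and the first name is a hypothetical filler for the unused 1-0-0 code:

```python
names = ["Ahmed", "AleRaza", "Zeeshan"]   # "Ahmed" is an invented filler entry

def one_hot(name, vocabulary):
    vec = [0] * len(vocabulary)
    vec[vocabulary.index(name)] = 1       # a single 1 marks the item's position
    return vec

print(one_hot("Zeeshan", names))  # [0, 0, 1]
print(one_hot("AleRaza", names))  # [0, 1, 0]
```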

  27. Deep Learning Applications of Feed-forward nets • Pattern recognition • Character recognition • Face Recognition • Sonar mine/rock recognition (Gorman & Sejnowski, 1988) • Navigation of a car (Pomerleau, 1989) • Stock-market prediction • Pronunciation (NETtalk) (Sejnowski & Rosenberg, 1987) Dept. Of Computer Science & IT FUUAST

  28. Deep Learning Feed-backward Neural Networks Feedback (or recurrent or interactive) networks can have signals traveling in both directions by introducing loops in the network. Feedback networks are powerful and can get extremely complicated. Computations derived from earlier input are fed back into the network, which gives them a kind of memory. Feedback networks are dynamic; their 'state' changes continuously until they reach an equilibrium point. They remain at the equilibrium point until the input changes and a new equilibrium needs to be found. The learning rate is introduced as a (usually very small) constant in order to force the weights to be updated smoothly and slowly (to avoid big steps and chaotic behaviour). Dept. Of Computer Science & IT FUUAST
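The 'memory' the slide describes can be sketched as a state vector that is fed back in at every step. This minimal recurrent update is a generic illustration, not any particular published architecture:

```python
import numpy as np

def rnn_step(x, h, W_x, W_h, b):
    """New state depends on the current input AND the previous state."""
    return np.tanh(W_x @ x + W_h @ h + b)

rng = np.random.default_rng(2)
W_x, W_h, b = rng.normal(size=(4, 3)), rng.normal(size=(4, 4)), np.zeros(4)

h = np.zeros(4)                      # initial state: no memory yet
for x in rng.normal(size=(5, 3)):    # a sequence of 5 inputs
    h = rnn_step(x, h, W_x, W_h, b)  # the loop feeds the state back in
print(h)
```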

  29. Deep Learning Recurrent Networks • Feed-forward networks: • Information only flows one way • One input pattern produces one output • No sense of time (or memory of previous state) • Recurrency • Nodes connect back to other nodes or themselves • Information flow is multidirectional • Sense of time and memory of previous state(s) • Biological nervous systems show high levels of recurrency (but feed-forward structures exist too) Dept. Of Computer Science & IT FUUAST

  30. Deep Learning Training the Network – Learning • Backpropagation • Requires training set (input / output pairs) • Starts with small random weights • Error is used to adjust weights (supervised learning) → gradient descent on the error landscape Weight settings determine the behaviour of a network. Dept. Of Computer Science & IT FUUAST

  31. Deep Learning Weight update rule: new weight = old weight - (learning rate × error derivative), i.e. w ← w - η ∂E/∂w Dept. Of Computer Science & IT FUUAST
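Putting the rule to work: a minimal gradient-descent loop for a single sigmoid neuron under squared error. Everything here (the toy data, learning rate 0.5, epoch count) is an illustrative sketch, not code from the slides:

```python
import math, random

random.seed(0)
w, b, lr = random.random(), 0.0, 0.5           # small random weight, learning rate

data = [(0.0, 0.0), (1.0, 1.0)]                # toy (input, target) pairs

for _ in range(1000):
    for x, t in data:
        y = 1.0 / (1.0 + math.exp(-(w * x + b)))  # forward pass (sigmoid)
        dE_dy = y - t                             # from E = (y - t)**2 / 2
        dy_dz = y * (1.0 - y)                     # sigmoid derivative
        w -= lr * dE_dy * dy_dz * x               # new w = old w - lr * dE/dw
        b -= lr * dE_dy * dy_dz                   # same rule for the bias
```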

  32. Deep Learning Pros & Cons • Advantages • It works! • Relatively fast • Downsides • Requires a training set • Can be slow • Probably not biologically realistic • Alternatives to Backpropagation • Hebbian learning: Not successful in feed-forward nets • Reinforcement learning: Only limited success • Artificial evolution: More general, but can be even slower than backprop Dept. Of Computer Science & IT FUUAST

  33. Deep Learning The End Dept. Of Computer Science & IT FUUAST
