Looking into the Black Box - A Theoretical Insight into Deep Learning Networks

Deep learning is a branch of machine learning and artificial intelligence that enables machines to learn from data without explicit programming.


Presentation Transcript


  1. Looking into the Black Box - A Theoretical Insight into Deep Learning Networks

  2. What is Deep Learning? Deep learning is a branch of machine learning based on artificial neural networks. Just as neural networks mimic the human brain, so does the deep learning built on them. Deep learning is a form of representation learning: the automated formation of useful representations from data. There is a variety of deep learning networks, such as the Multilayer Perceptron (MLP), Autoencoder (AE), Convolutional Neural Network (CNN), and Recurrent Neural Network (RNN).

  3. [Diagram showing the relationship among Artificial Intelligence, Machine Learning, Deep Learning, and Data Science: deep learning is a subset of machine learning, which is a subset of artificial intelligence.]

  4. Why Is Deep Learning Successful? Deep learning models are large and deep artificial neural networks. A neural network (NN) is well represented as a directed acyclic graph: the input layer takes in signal vectors, and one or more hidden layers process the outputs of the previous layer. But why does it work now? Why is deep learning more successful now than ever before?

  5. Architecture of Deep Learning: (1) understand the problem and check feasibility for deep learning; (2) identify relevant data and prepare it; (3) choose a deep learning algorithm; (4) train the algorithm; (5) test the model's performance. A code sketch of this workflow follows.
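
The five steps above map naturally onto code. Below is a minimal sketch of steps 2 through 5 in PyTorch; the deck names no framework, so the library, the synthetic data, and all layer sizes here are illustrative assumptions.

    import torch
    from torch import nn

    # Step 2: identify relevant data and prepare it (synthetic stand-in here).
    X = torch.randn(1000, 16)             # 1000 samples, 16 features each
    y = (X.sum(dim=1) > 0).long()         # binary labels derived from the data
    X_train, X_test = X[:800], X[800:]
    y_train, y_test = y[:800], y[800:]

    # Step 3: choose a deep learning algorithm (a small MLP classifier).
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

    # Step 4: train the algorithm.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(20):
        optimizer.zero_grad()
        loss = loss_fn(model(X_train), y_train)
        loss.backward()
        optimizer.step()

    # Step 5: test the model's performance on held-out data.
    with torch.no_grad():
        accuracy = (model(X_test).argmax(dim=1) == y_test).float().mean()
    print(f"test accuracy: {accuracy:.2f}")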

  6. Deep Learning Model [Diagram of a layered network. I - Input layer, H - Hidden layer, O - Output layer.]

  7. Why Deep Learning? How do data science techniques scale with the amount of data? [Chart: performance vs. amount of data, comparing deep learning with older learning algorithms; deep learning keeps improving as data grows, while older algorithms plateau.]

  8. Why Is Deep Learning Successful? The reasons for deep learning networks' success are: we have a lot more data, and we have much more powerful computers. A large and deep neural network has many layers and many nodes in each layer, resulting in many parameters to tune. Without enough data, we cannot learn the parameters efficiently. Without powerful computers, learning would be too slow and insufficient.
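
To make "many parameters to tune" concrete: a dense layer from n inputs to m outputs has n*m weights plus m biases, so parameter counts grow quickly with depth and width. A small PyTorch sketch with illustrative layer sizes (an assumption, not from the deck):

    from torch import nn

    # An MLP with two hidden layers; sizes are illustrative.
    mlp = nn.Sequential(
        nn.Linear(784, 512), nn.ReLU(),   # 784*512 + 512 = 401,920 parameters
        nn.Linear(512, 512), nn.ReLU(),   # 512*512 + 512 = 262,656 parameters
        nn.Linear(512, 10),               # 512*10  + 10  =   5,130 parameters
    )
    total = sum(p.numel() for p in mlp.parameters())
    print(total)  # 669,706 -- even a modest network has many parameters to tune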

  9. Why Is Deep Learning Successful? Neural networks are either encoders, decoders, or a combination of both. Encoders find patterns in raw data to form compact, useful representations. Decoders generate high-resolution data from those representations. The generated data is either new examples or descriptive knowledge.

  10. Traditional Pattern Recognition (fixed/handcrafted features): Feature Extractor → Trainable Classifier. Mainstream Pattern Recognition (unsupervised mid-level features): Feature Extractor → Mid-Level Features → Trainable Classifier. Deep Learning (representations are hierarchical and trained): Low-Level Features → Mid-Level Features → High-Level Features → Trainable Classifier.

  11. Types of Deep Learning Models. Supervised Learning: Feed Forward Neural Networks, Convolutional Neural Networks, Recurrent Neural Networks, Encoder-Decoder Architectures. Unsupervised Learning: Autoencoders, Generative Adversarial Networks. Reinforcement Learning: Networks for Actions, Values, Policies, & Models.

  12. Feed Forward Neural Networks (FFNNs). FFNNs, dating back to the 1940s, are networks that don't have any cycles: data passes from input to output in a single pass without any "state memory". Technically, most networks in deep learning can be considered FFNNs, but "FFNN" usually refers to the simplest variant: a densely-connected multilayer perceptron (MLP). Dense encoders are used to map an already compact set of numbers at the input to a prediction: either a classification (discrete) or a regression (continuous).

  13. Feed Forward Neural Networks [Diagram] Input (a few numbers) → Dense Encoder → Representation → Prediction (output), compared against the ground-truth prediction.
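
A minimal PyTorch sketch of the flow in the diagram above (the framework, class name, and layer sizes are illustrative assumptions): a dense encoder maps a few input numbers to a compact representation, and a final linear layer turns that representation into a prediction.

    import torch
    from torch import nn

    class DenseEncoderClassifier(nn.Module):
        """Input (a few numbers) -> dense encoder -> representation -> prediction."""
        def __init__(self, in_features=8, hidden=32, rep_dim=16, n_classes=3):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(in_features, hidden), nn.ReLU(),
                nn.Linear(hidden, rep_dim), nn.ReLU(),
            )
            self.head = nn.Linear(rep_dim, n_classes)  # prediction from the representation

        def forward(self, x):
            representation = self.encoder(x)
            return self.head(representation)

    model = DenseEncoderClassifier()
    logits = model(torch.randn(4, 8))   # batch of 4 inputs, 8 numbers each
    print(logits.shape)                 # torch.Size([4, 3])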

  14. Convolutional Neural Networks (CNNs). CNNs are feed forward neural networks that use a spatial-invariance trick to efficiently learn local patterns, most commonly in images. Spatial invariance means that a subject's ear in the top left of an image has the same features as a subject's ear in the bottom right. CNNs share weights across space to make the detection of a subject's ears and other patterns more efficient. Instead of using only densely-connected layers, they use convolutional layers (a convolutional encoder). These networks are used for image classification, object detection, video action recognition, and any data that has some spatial invariance in its structure.

  15. Convolutional Neural Networks [Diagram] Input (an image) → Convolutional Encoder → Representation → Prediction (output), compared against the ground-truth prediction.
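
A minimal PyTorch sketch of a convolutional encoder feeding a classifier (layer sizes and the 32x32 input resolution are illustrative assumptions); the same small filters scan every position, which is the weight sharing across space described above.

    import torch
    from torch import nn

    class ConvEncoderClassifier(nn.Module):
        """Input (an image) -> convolutional encoder -> representation -> prediction."""
        def __init__(self, n_classes=10):
            super().__init__()
            self.encoder = nn.Sequential(
                # Weights are shared across space: the same 3x3 filters scan the image.
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 8 * 8, n_classes)  # for 32x32 input images

        def forward(self, x):
            representation = self.encoder(x)                   # (batch, 32, 8, 8)
            return self.head(representation.flatten(start_dim=1))

    model = ConvEncoderClassifier()
    logits = model(torch.randn(4, 3, 32, 32))  # batch of 4 RGB 32x32 images
    print(logits.shape)                        # torch.Size([4, 10])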

  16. Recurrent Neural Networks (RNNs). RNNs are networks that have cycles and therefore have "state memory". They can be unrolled in time to become feed forward networks in which the weights are shared. Just as CNNs share weights across "space", RNNs share weights across "time". This allows them to process and efficiently represent patterns in sequential data. Many variants of RNN modules have been developed, including LSTMs and GRUs, to help learn patterns in longer sequences. Applications include natural language modeling, speech recognition, speech generation, etc.

  17. Recurrent Neural Network [Diagram] Input (a sequence) → Recurrent Encoder → Representation → Prediction (output), compared against the ground-truth prediction.
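
A minimal PyTorch sketch of a recurrent encoder, using an LSTM (one of the variants named on the slide); dimensions are illustrative assumptions. The same weights are applied at every time step, and the final hidden state serves as the representation of the whole sequence.

    import torch
    from torch import nn

    class RecurrentEncoderClassifier(nn.Module):
        """Input (a sequence) -> recurrent encoder -> representation -> prediction."""
        def __init__(self, in_features=8, hidden=32, n_classes=2):
            super().__init__()
            # The LSTM's weights are shared across time steps.
            self.encoder = nn.LSTM(in_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):                       # x: (batch, time, features)
            outputs, (h_n, c_n) = self.encoder(x)
            representation = h_n[-1]                # final hidden state summarizes the sequence
            return self.head(representation)

    model = RecurrentEncoderClassifier()
    logits = model(torch.randn(4, 20, 8))  # batch of 4 sequences, 20 steps each
    print(logits.shape)                    # torch.Size([4, 2])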

  18. Encoder-Decoder Architectures. The FFNNs, CNNs, and RNNs presented in the first three sections are simply networks that make a prediction using a dense encoder, a convolutional encoder, or a recurrent encoder, respectively. These encoders can be combined or switched depending on the kind of raw data we're trying to form a useful representation of. The "Encoder-Decoder" architecture is a higher-level concept that builds on the encoding step: instead of making a prediction, it generates a high-dimensional output via a decoding step that upsamples the compressed representation. Applications include semantic segmentation, machine translation, etc.

  19. Encoder-Decoder Architectures [Diagram] Input (image, text, etc.) → Any Encoder → Representation → Any Decoder → Output (image, text, etc.), compared against the ground truth (image, text, etc.).
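
A minimal PyTorch sketch of an encoder-decoder for a segmentation-style task (the architecture, channel counts, and 64x64 input size are illustrative assumptions): convolutions compress the image into a representation, and transposed convolutions upsample it back to a full-resolution output.

    import torch
    from torch import nn

    class EncoderDecoder(nn.Module):
        """Image -> encoder -> compressed representation -> decoder -> full-size
        output, e.g. a per-pixel class map for semantic segmentation."""
        def __init__(self, n_classes=5):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # downsample
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # downsample
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),           # upsample
                nn.ConvTranspose2d(16, n_classes, 2, stride=2),               # upsample
            )

        def forward(self, x):
            representation = self.encoder(x)   # spatially compressed
            return self.decoder(representation)

    model = EncoderDecoder()
    out = model(torch.randn(1, 3, 64, 64))
    print(out.shape)  # torch.Size([1, 5, 64, 64]) -- one class score per pixel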

  20. Autoencoders. Autoencoders are one of the simpler forms of "unsupervised learning": they take the encoder-decoder architecture and learn to generate an exact copy of the input data. Since the encoded representation is much smaller than the input data, the network is forced to learn how to form the most meaningful representation. Since the ground truth comes from the input data, no human effort is required; in other words, it's self-supervised. Applications include unsupervised embeddings, image denoising, etc.

  21. Autoencoder [Diagram] Input (image, text, etc.) → Any Encoder → Representation → Any Decoder → Output, compared against the ground truth: an exact copy of the input.
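
A minimal PyTorch sketch of an autoencoder (sizes are illustrative assumptions): the reconstruction loss compares the output against the input itself, which is exactly the self-supervision described above.

    import torch
    from torch import nn

    class Autoencoder(nn.Module):
        """Learns to reproduce its input through a much smaller representation."""
        def __init__(self, in_features=784, rep_dim=32):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(in_features, 128), nn.ReLU(),
                                         nn.Linear(128, rep_dim))       # bottleneck
            self.decoder = nn.Sequential(nn.Linear(rep_dim, 128), nn.ReLU(),
                                         nn.Linear(128, in_features))

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = Autoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(64, 784)                  # stand-in batch of flattened images
    for step in range(100):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), x)  # ground truth is the input itself
        loss.backward()
        optimizer.step()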

  22. Generative Adversarial Networks (GANs). GANs are a framework for training networks optimized for generating new realistic samples from a particular representation. In its simplest form, the training process involves two networks: one network, called the generator, generates new data instances, trying to fool the other network, the discriminator, which classifies images as real or fake. GANs can generate images of a particular class, map images from one domain to another, and achieve an incredible increase in the realism of generated images.

  23. Generative Adversarial Networks [Diagram] Noise → Generator → Fake Image; the Discriminator (thrown away after training) takes fake and real images as input and predicts real or fake, compared against the ground truth.
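
A minimal PyTorch sketch of the two-network training loop (the network sizes, learning rates, and flattened 784-dimensional "images" are illustrative assumptions): the discriminator learns to separate real from fake, while the generator learns to fool it.

    import torch
    from torch import nn

    # Illustrative sizes: 64-dim noise, 784-dim "images" (e.g. flattened 28x28).
    generator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
    discriminator = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()
    real_images = torch.rand(32, 784)  # stand-in for a batch of real data

    for step in range(200):
        # Train the discriminator: real images -> 1, generated images -> 0.
        fake_images = generator(torch.randn(32, 64)).detach()
        d_loss = (bce(discriminator(real_images), torch.ones(32, 1)) +
                  bce(discriminator(fake_images), torch.zeros(32, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Train the generator: try to fool the discriminator into predicting "real".
        fake_images = generator(torch.randn(32, 64))
        g_loss = bce(discriminator(fake_images), torch.ones(32, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    # After training, the discriminator is thrown away; only the generator is kept.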

  24. Deep Reinforcement Learning (Deep RL). Deep RL allows us to apply neural networks in simulated or real-world environments where sequences of decisions need to be made. When the learning is done by a neural network, we refer to it as Deep Reinforcement Learning. There are three types of RL frameworks: policy-based, value-based, and model-based; the distinction is what the neural network is tasked with learning. Applications include game playing, robotics, neural architecture search, and much more.

  25. Deep Reinforcement Learning [Diagram] A state sampled from the world → Any Encoder → Representation → Action (output); the reward from the world serves as the ground truth.
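
As a sketch of the policy-based case, here is a minimal REINFORCE-style update in PyTorch (REINFORCE is one classic policy-gradient method, not named in the deck; the state/action dimensions and the env_step function are hypothetical stand-ins for a real environment): the policy network maps a state to action probabilities, and actions that led to higher reward are made more likely.

    import torch
    from torch import nn

    # A policy network: state -> encoder -> representation -> action probabilities.
    policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

    def run_episode(env_step, state, max_steps=100):
        """Roll out one episode, recording log-probs and rewards.
        env_step is a hypothetical environment function returning
        (next_state, reward, done)."""
        log_probs, rewards = [], []
        for _ in range(max_steps):
            dist = torch.distributions.Categorical(logits=policy(state))
            action = dist.sample()
            log_probs.append(dist.log_prob(action))
            state, reward, done = env_step(state, action.item())  # reward = ground-truth signal
            rewards.append(reward)
            if done:
                break
        return log_probs, rewards

    def reinforce_update(log_probs, rewards, gamma=0.99):
        """Policy-gradient step: raise the probability of actions that led to reward."""
        returns, g = [], 0.0
        for r in reversed(rewards):              # discounted return at each step
            g = r + gamma * g
            returns.insert(0, g)
        loss = -(torch.stack(log_probs) * torch.tensor(returns)).sum()
        optimizer.zero_grad(); loss.backward(); optimizer.step()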

  26. Advantages. Best-in-class performance on problems. Reduces the need for feature engineering. Easily identifies defects that are difficult to detect. Eliminates unnecessary costs.

  27. Applications of Deep Learning Models. Automatic Text Generation – a corpus of text is learned, and the model generates new text word-by-word or character-by-character; it can learn how to spell, punctuate, and form sentences, and may even capture the style. Healthcare – helps in diagnosing and treating various diseases. Automatic Machine Translation – words, sentences, or phrases in one language are transformed into another language (deep learning is achieving top results in the areas of text and images).

  28. Applications of Deep Learning Models. Image Recognition – recognizes and identifies people and objects in images and understands their content and context; this area is already being used in gaming, retail, tourism, etc. Predicting Earthquakes – teaches a computer to perform the viscoelastic computations used in predicting earthquakes.

  29. To assist you with our services, please reach us at hello@mitosistech.com | www.mitosistech.com | IND: +91-7824035173 | US: +1-(415) 251-2064
