Deep Belief Networks


Presentation Transcript


  1. Deep Belief Networks • Psychology 209, February 22, 2013

  2. Why a Deep Network? • Why not just one layer of hidden units? • Fails to capture constraints on the problem. • For many problems, requires exponential hardware. • Two examples: • Parity (see the sketch below) • Letters × positions
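
A small Python sketch (not from the slides; the sizes are illustrative) makes the parity example concrete: a single hidden layer of threshold-style units needs one 'template' unit for each of the 2^(n-1) odd-parity input patterns, while a deep network needs only a chain of n-1 two-input XOR stages.

```python
# Sketch: shallow vs. deep solutions to n-bit parity (illustrative, not course code).
import itertools

def parity(bits):
    return sum(bits) % 2

def shallow_parity(bits):
    """One hidden layer with 2**(n-1) template units, one per odd-parity pattern."""
    n = len(bits)
    active = sum(tuple(bits) == p
                 for p in itertools.product([0, 1], repeat=n) if parity(p) == 1)
    return int(active > 0)          # output unit: OR over the exponential hidden layer

def deep_parity(bits):
    """Deep alternative: a chain of n-1 two-input XOR stages."""
    acc = bits[0]
    for b in bits[1:]:
        acc ^= b                    # each XOR stage is itself a small sub-network
    return acc

n = 6
for bits in itertools.product([0, 1], repeat=n):
    assert shallow_parity(bits) == deep_parity(bits) == parity(bits)
print("shallow hidden units:", 2 ** (n - 1), "  deep XOR stages:", n - 1)
```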

  3. But, says LeCun…

  4. Stacked Auto-Encoders • To capture intermediate-level structure, one might use stacked auto-encoders (a sketch follows below). • But training can be very slow as more layers are added. • Backprop slows exponentially in the number of layers.
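
The following is a minimal numpy sketch of the stacked auto-encoder idea, not the course's code: each layer is trained on its own reconstruction error, and its hidden activations become the training data for the next layer. The layer sizes, learning rate, and toy data are arbitrary stand-ins.

```python
# Sketch: greedy layerwise training of stacked auto-encoders (toy data, toy sizes).
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.5, epochs=200):
    """Train one encoder-decoder pair to reconstruct X; return the encoder weights."""
    n_samples, n_vis = X.shape
    W1 = 0.1 * rng.standard_normal((n_vis, n_hidden))   # encoder
    b1 = np.zeros(n_hidden)
    W2 = 0.1 * rng.standard_normal((n_hidden, n_vis))   # decoder
    b2 = np.zeros(n_vis)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)                         # encode
        Xhat = sigmoid(H @ W2 + b2)                      # decode (reconstruction)
        d_out = (Xhat - X) * Xhat * (1 - Xhat)           # squared-error gradient at output
        d_hid = (d_out @ W2.T) * H * (1 - H)             # backprop one step to the code
        W2 -= lr * H.T @ d_out / n_samples
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_hid / n_samples
        b1 -= lr * d_hid.mean(axis=0)
    return W1, b1

def encode(X, W1, b1):
    return sigmoid(X @ W1 + b1)

# Greedy stacking: each new auto-encoder is trained on the codes of the one below.
X = rng.uniform(size=(200, 64))                          # toy 'images' in [0, 1]
layers, H = [], X
for size in [32, 16]:
    W, b = train_autoencoder(H, size)
    layers.append((W, b))
    H = encode(H, W, b)                                  # input to the next layer
```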

  5. The deep belief network vision (Hinton) • Consider some sense data D • We imagine our goal is to understand what generated it • We use a generative model • Search for the most probable ‘cause’ C of the data • The one where p(D|C)p(C) is greatest • How do we find C? [Diagram: Cause → Data]
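
The generative view amounts to choosing the cause C that maximizes p(D|C)p(C). Below is a tiny illustrative sketch, with made-up priors and Bernoulli pixel likelihoods (none of these numbers come from the slides), that scores each candidate cause of a binary data vector and picks the most probable one.

```python
# Sketch: pick the cause C that maximizes p(D|C) p(C) for a binary data vector D.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_causes = 20, 3

prior = np.array([0.5, 0.3, 0.2])                              # hypothetical p(C)
bernoulli = rng.uniform(0.1, 0.9, size=(n_causes, n_pixels))   # hypothetical p(pixel=1 | C)

# Sample some data D from cause 1, then try to recover the cause from D alone.
true_c = 1
D = (rng.uniform(size=n_pixels) < bernoulli[true_c]).astype(float)

# log p(D|C) + log p(C) for each candidate cause
log_lik = (D * np.log(bernoulli) + (1 - D) * np.log(1 - bernoulli)).sum(axis=1)
log_post = log_lik + np.log(prior)
print("most probable cause:", int(np.argmax(log_post)), " (true cause:", true_c, ")")
```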

  6. One- and Two-Layer Belief Networks • How should we train such networks?

  7. ‘Greedy’ layerwise learning of RBMs • First learn H0 based on the input. • Then learn H1 based on H0, etc. • Then ‘fine tune’, says Hinton. • Stacking RBMs (see the sketch below).
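
Here is a minimal numpy sketch of the greedy stacking recipe under the usual assumptions (binary units, one-step contrastive divergence, toy data). It is meant to show the structure of the procedure, not Hinton's actual training setup, and it omits the final fine-tuning step.

```python
# Sketch: greedy layerwise stacking of RBMs trained with CD-1 (toy data, toy sizes).
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_rbm(V, n_hidden, lr=0.1, epochs=20):
    """Train one RBM on binary data V (n_samples x n_visible) with CD-1."""
    n_samples, n_vis = V.shape
    W = 0.01 * rng.standard_normal((n_vis, n_hidden))
    a = np.zeros(n_vis)                              # visible biases
    b = np.zeros(n_hidden)                           # hidden biases
    for _ in range(epochs):
        # positive phase: hidden probabilities given the data
        ph = sigmoid(V @ W + b)
        h = (rng.uniform(size=ph.shape) < ph).astype(float)
        # negative phase: one step of alternating Gibbs sampling (CD-1)
        pv = sigmoid(h @ W.T + a)
        v1 = (rng.uniform(size=pv.shape) < pv).astype(float)
        ph1 = sigmoid(v1 @ W + b)
        W += lr * (V.T @ ph - v1.T @ ph1) / n_samples
        a += lr * (V - v1).mean(axis=0)
        b += lr * (ph - ph1).mean(axis=0)
    return W, a, b

def up(V, W, b):
    """Deterministic upward pass: hidden probabilities become the next layer's data."""
    return sigmoid(V @ W + b)

# Greedy stacking: H0 from the input, then H1 from H0, and so on.
X = (rng.uniform(size=(500, 64)) < 0.3).astype(float)   # toy binary 'images'
layers, H = [], X
for n_hidden in [32, 16]:
    W, a, b = train_rbm(H, n_hidden)
    layers.append((W, a, b))
    H = up(H, W, b)
```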

  8. Test Procedure • Generation: • Clamp a digit identity • Do ‘alternating Gibbs sampling’ from random starting image; send state back down to see what it is like • Recognition • Clamp input pattern on ‘retina’ • Feed up, perform alternating Gibbs sampling at top levels. Check out the movie: http://www.cs.toronto.edu/~hinton/digits.html
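
The generation test can be sketched as follows. This standalone toy uses random stand-in weights rather than a trained top-level RBM: it clamps the label part of the visible layer (the digit identity) and runs alternating Gibbs sampling over the remaining units; in the real model the resulting top-level state is then sent back down through the lower layers to see what it looks like, as the slide describes.

```python
# Sketch: clamp a digit label and run alternating Gibbs sampling at the top level.
# Weights here are random stand-ins, purely to show the sampling loop.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_label, n_image, n_hidden = 10, 64, 32
n_vis = n_label + n_image                        # visible layer = label units + image units
W = 0.1 * rng.standard_normal((n_vis, n_hidden))
a, b = np.zeros(n_vis), np.zeros(n_hidden)

label = np.zeros(n_label)
label[3] = 1.0                                   # clamp digit identity '3'
image = (rng.uniform(size=n_image) < 0.5).astype(float)   # random starting image

for step in range(200):                          # alternating Gibbs sampling
    v = np.concatenate([label, image])           # the label part stays clamped
    ph = sigmoid(v @ W + b)
    h = (rng.uniform(size=n_hidden) < ph).astype(float)
    pv = sigmoid(h @ W.T + a)
    image = (rng.uniform(size=n_image) < pv[n_label:]).astype(float)

print("sampled top-level 'image' state:", image[:16], "...")
```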

  9. Close Calls (49) and Errors (125) out of 10,000 Test Digits

  10. But it doesn’t always work so well • We need to reduce the Energy (increase the goodness) of the sample data (Y) and decrease the goodness of everything else (Y′). • But there is too much ‘everything else’. • “That’s great,” says Yann LeCun…

  11. LeCun’s view of Stacked Encoder Networks • Think of each layer as an encoder-decoder pair learning to minimize its own ‘reconstruction error’ ~ ‘maximize the probability of the training data’ • Starting from this, can we make the encoder/decoder more powerful and also more constrained than an RBM?

  12. Two New Ideas and One Old • Force the representation to be sparse • It can’t represent too many possibilities, so it makes most of the input space bad automatically! • Just pull down the Energy of the samples and the rest will take care of itself! • Let the encoder be as smart as you want it to be. • Why just use one feed-forward layer on the encoder side of each layer? Why not use the full potential of a multi-layer network? • Force invariance by re-using the same weights at many positions across lower layers (see the sketch after this slide).
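
Two of these ideas, re-using the same weights at every position and forcing the code to be sparse, can be sketched in a few lines of numpy. The filter bank, the 1-D input, and the top-k sparsity rule below are illustrative stand-ins, not LeCun's actual architecture.

```python
# Sketch: weight sharing (a 1-D convolutional encoder) plus a top-k sparsity constraint.
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal(100)                     # a toy 1-D 'image'
filters = rng.standard_normal((4, 9))            # 4 shared filters, each of width 9

# Weight sharing: each filter is applied with the same weights at every position.
codes = np.stack([np.convolve(x, f, mode="valid") for f in filters])

# Sparsity: zero out everything except the k strongest responses.
k = 10
flat = np.abs(codes).ravel()
threshold = np.partition(flat, -k)[-k]           # magnitude of the k-th largest response
sparse_codes = np.where(np.abs(codes) >= threshold, codes, 0.0)

print("active code units:", int((sparse_codes != 0).sum()), "of", codes.size)
```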

  13. IMAGENET Large Scale Visual Recognition Challenge 2012 • Tasks: • Classification • Classification with Localization • Training data: 1.2 M images from 1,000 classes. • English setter • Granny Smith • Ladle • Validation set: 50,000 images not in training set • Test set: 100,000 images not in Validation or training set. • An item is scored as correct if the correct answer is one of the network’s top 5 guesses
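
The top-5 scoring rule is easy to state in code. The sketch below uses random stand-in scores and labels purely to show the computation: an item counts as correct if the true class is among the network's five highest-scoring guesses.

```python
# Sketch: top-5 error, with random stand-in network outputs and labels.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_classes = 1000, 1000
scores = rng.standard_normal((n_items, n_classes))    # stand-in network outputs
labels = rng.integers(0, n_classes, size=n_items)     # stand-in true classes

top5 = np.argsort(-scores, axis=1)[:, :5]             # five best guesses per item
correct = (top5 == labels[:, None]).any(axis=1)
print("top-5 error rate:", 1.0 - correct.mean())
```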

  14. The Results • Classification error rate: SuperVision .164, Runner-Up .262 • Localization error rate: SuperVision .342, Runner-Up .500 • SuperVision Team: Alex Krizhevsky, Ilya Sutskever, Geoffrey Hinton • SuperVision Model: Our model is a large, deep convolutional neural network trained on raw RGB pixel values. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three globally-connected layers with a final 1000-way softmax. It was trained on two NVIDIA GPUs for about a week. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of convolutional nets. To reduce overfitting in the globally-connected layers we employed hidden-unit "dropout", a recently-developed regularization method that proved to be very effective. • Dropout: For each presentation of an item during learning, force a fraction of the hidden units, chosen at random, to have activation value zero.
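
The dropout rule in the last bullet can be sketched as follows. The layer sizes are toy values, and the drop probability of 0.5 with test-time rescaling is the common choice rather than something stated on the slide.

```python
# Sketch: dropout on a hidden layer -- silence a random fraction of units during
# training, rescale activations at test time. Sizes and probability are illustrative.
import numpy as np

rng = np.random.default_rng(0)
p_drop = 0.5                                     # fraction of hidden units to silence

def hidden_layer(x, W, b, train=True):
    h = np.maximum(0.0, x @ W + b)               # non-saturating (ReLU) units
    if train:
        mask = (rng.uniform(size=h.shape) >= p_drop).astype(float)
        return h * mask                          # a random subset of units is set to zero
    return h * (1.0 - p_drop)                    # test time: rescale instead of dropping

x = rng.standard_normal((8, 20))                 # a toy batch of 8 inputs
W = 0.1 * rng.standard_normal((20, 50))
b = np.zeros(50)
h_train = hidden_layer(x, W, b, train=True)
print("fraction of zero activations during training:", float((h_train == 0).mean()))
```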
