
Restricted Boltzmann Machine and Deep Belief Net


Presentation Transcript


  1. Restricted Boltzmann Machine and Deep Belief Net Wanli Ouyang wlouyang@ee.cuhk.edu.hk Animation is available for illustration

  2. Outline • Short introduction on deep learning • Short introduction on statistical models and graphical models • Restricted Boltzmann Machine (RBM) and contrastive divergence • Deep belief net (DBN). Annotations on this slide: RBM and DBN are statistical models; the deep belief net is trained using RBM and CD; the deep belief net is an unsupervised training algorithm for deep neural networks.

  3. Good learning resources • Webpages: • Geoffrey E. Hinton’s readings (with source code available for DBN) http://www.cs.toronto.edu/~hinton/csc2515/deeprefs.html • Notes on Deep Belief Networks http://www.quantumg.net/dbns.php • MLSS Tutorial, October 2010, ANU Canberra, Marcus Frean http://videolectures.net/mlss2010au_frean_deepbeliefnets/ • Deep Learning Tutorials http://deeplearning.net/tutorial/ • Hinton’s Tutorial http://videolectures.net/mlss09uk_hinton_dbn/ • Fergus’s Tutorial http://cs.nyu.edu/~fergus/presentations/nips2013_final.pdf • CUHK MMlab project: http://mmlab.ie.cuhk.edu.hk/project_deep_learning.html • People: • Geoffrey E. Hinton http://www.cs.toronto.edu/~hinton • Andrew Ng http://www.cs.stanford.edu/people/ang/index.html • Ruslan Salakhutdinov http://www.utstat.toronto.edu/~rsalakhu/ • Yee-Whye Teh http://www.gatsby.ucl.ac.uk/~ywteh/ • Yoshua Bengio www.iro.umontreal.ca/~bengioy • Yann LeCun http://yann.lecun.com/ • Marcus Frean http://ecs.victoria.ac.nz/Main/MarcusFrean • Rob Fergus http://cs.nyu.edu/~fergus/pmwiki/pmwiki.php • Acknowledgement: many materials in this presentation are from these papers and tutorials (especially Hinton’s and Frean’s); sorry for not listing them in full detail. Dumitru Erhan, Aaron Courville, Yoshua Bengio. Understanding Representations Learned in Deep Architectures. Technical Report.

  4. A brief history. In 1986, neural networks trained with back-propagation aimed to solve general learning problems and were tied to biological systems, but they were largely given up: hard to train, insufficient computational resources, small training sets, and they did not work well. The field turned to shallow models and specific methods for specific tasks: hand-crafted features (GMM-HMM, SIFT, LBP, HOG) with SVM, boosting, decision trees, KNN, and so on. In 2006 the deep belief net appeared in Science; deep learning then delivered strong results in speech recognition (2011) and in object recognition over 1,000,000 images and 1,000 categories (2012, trained on 2 GPUs; the "how many computers to identify a cat?" experiment used 16,000 CPU cores). What enabled this: unsupervised, layer-wise pre-training; better designs for modeling and training (normalization, nonlinearity, dropout); feature learning; new computer architectures (GPU, multi-core systems); large-scale databases; and only a loose tie with biological systems. Kruger et al., TPAMI'13.

  5. Outline Short introduction on deep learning Short introduction on statistical models and Graphical model Restricted Boltzmann Machine (RBM) and Contrastive divergence Deep belief net (DBN)

  6. Graphical models for statistics. Example: node C = is a smoker, node A = has lung cancer, node B = has bronchitis. • Conditional independence between random variables • Given C, A and B are independent: P(A, B|C) = P(A|C)P(B|C) • P(A,B,C) = P(A, B|C)P(C) = P(A|C)P(B|C)P(C) • Any two nodes are conditionally independent given the values of their parents. http://www.eecs.qmul.ac.uk/~norman/BBNs/Independence_and_conditional_independence.htm
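As a quick sanity check of the factorization above, here is a minimal NumPy sketch (the probability tables are made up for illustration) that builds P(A,B,C) = P(A|C)P(B|C)P(C) for binary variables and verifies that P(A,B|C) = P(A|C)P(B|C):

```python
import numpy as np

# Hypothetical conditional probability tables for binary A, B, C.
p_c = np.array([0.7, 0.3])            # P(C)
p_a_given_c = np.array([[0.9, 0.1],   # P(A|C=0)
                        [0.4, 0.6]])  # P(A|C=1)
p_b_given_c = np.array([[0.8, 0.2],   # P(B|C=0)
                        [0.3, 0.7]])  # P(B|C=1)

# Joint P(A,B,C) = P(A|C) P(B|C) P(C), stored as joint[a, b, c].
joint = np.einsum('ca,cb,c->abc', p_a_given_c, p_b_given_c, p_c)
assert np.isclose(joint.sum(), 1.0)

# Conditioning on C should factorize: P(A,B|C) = P(A|C) P(B|C).
p_ab_given_c = joint / joint.sum(axis=(0, 1), keepdims=True)
for c in (0, 1):
    assert np.allclose(p_ab_given_c[:, :, c],
                       np.outer(p_a_given_c[c], p_b_given_c[c]))
print("P(A,B|C) = P(A|C)P(B|C) holds for this joint")
```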

  7. Directed and undirected graphical models • Directed graphical model • P(A,B,C) = P(A|C)P(B|C)P(C) • With a fourth node D: P(A,B,C,D) = P(D|A,B)P(B|C)P(A|C)P(C) • Any two nodes are conditionally independent given the values of their parents. • Undirected graphical model • P(A,B,C) ∝ ψ(A,C)ψ(B,C), a product of potential functions over the edges, normalized by a partition function • Also called a Markov Random Field (MRF)

  8. Modeling with an undirected model. Example: three binary variables (is a smoker, is healthy, has lung cancer) connected to the smoker node C by two edges with weights w1 and w2. The joint probability is a product of edge potentials normalized by the partition function: P(A,B,C) = ψ(A,C)ψ(B,C) / Z, where Z sums the unnormalized product over all configurations of A, B, C.
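A minimal sketch of what the partition function does, with made-up numbers: for binary A, B, C and assumed edge weights w1 and w2, score each configuration by exp(w1·A·C + w2·B·C) and let Z be the sum of that score over all 2^3 configurations.

```python
import itertools
import numpy as np

w1, w2 = 1.5, -0.5  # assumed edge weights for the A-C and B-C edges

def score(a, b, c):
    """Unnormalized potential exp(w1*a*c + w2*b*c) of one configuration."""
    return np.exp(w1 * a * c + w2 * b * c)

# Partition function: sum of unnormalized scores over all 2**3 binary states.
Z = sum(score(a, b, c) for a, b, c in itertools.product([0, 1], repeat=3))

def prob(a, b, c):
    """Normalized probability P(A=a, B=b, C=c)."""
    return score(a, b, c) / Z

total = sum(prob(a, b, c) for a, b, c in itertools.product([0, 1], repeat=3))
print(f"Z = {Z:.4f}, total probability = {total:.4f}")  # total is 1.0
```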

  9. More directed and undirected models. Two examples: a hidden Markov model with hidden states h1, h2, h3 emitting observations y1, y2, y3, and an MRF on a 2D grid with nodes A through I arranged in a 3x3 lattice.

  10. More directed and undirected models. For the hidden Markov model: P(y1, y2, y3, h1, h2, h3) = P(h1)P(h2|h1)P(h3|h2)P(y1|h1)P(y2|h2)P(y3|h3). For a directed graph over A, B, C, D: P(A,B,C,D) = P(A)P(B)P(C|B)P(D|A,B,C).

  11. More directed and undirected models

  12. Extended reading on graphical models. Zoubin Ghahramani's video lecture on graphical models: http://videolectures.net/mlss07_ghahramani_grafm/

  13. Outline (annotation on this slide: the RBM with contrastive divergence is a training algorithm for the deep belief net) • Short introduction on deep learning • Short introduction on statistical models and Graphical model • Restricted Boltzmann machine and Contrastive divergence • Product of experts • Contrastive divergence • Restricted Boltzmann Machine • Deep belief net

  14. Outline (annotation on this slide: the Restricted Boltzmann Machine is a specific, useful case of a product of experts) • Short introduction on deep learning • Short introduction on statistical models and Graphical model • Restricted Boltzmann machine and Contrastive divergence • Product of experts • Contrastive divergence • Restricted Boltzmann Machine • Deep belief net

  15. Outline • Short introduction on deep learning • Short introduction on statistical models and Graphical model • Restricted Boltzmann machine and Contrastive divergence • Product of experts • Contrastive divergence • Restricted Boltzmann Machine • Deep belief net

  16. Product of Experts. The model multiplies several expert functions together and normalizes by a partition function; equivalently, it can be written in terms of an energy function. The 2D MRF over nodes A through I is one example of a product of experts.

  17. Product of Experts

  18. Products of experts versus Mixture model • Products of experts : • "and" operation • Sharper than mixture • Each expert can constrain a different subset of dimensions. • Mixture model, e.g. Gaussian Mixture model • “or” operation • a weighted sum of many density functions
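To make the "and" versus "or" distinction concrete, here is a small sketch with made-up means and variances: a product of two 1-D Gaussian experts (renormalized numerically) ends up narrower than either expert, while their mixture spreads out.

```python
import numpy as np

def gaussian(x, mean, std):
    return np.exp(-(x - mean) ** 2 / (2 * std ** 2)) / (std * np.sqrt(2 * np.pi))

x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]
expert1 = gaussian(x, -1.0, 2.0)   # assumed expert densities
expert2 = gaussian(x, 1.5, 1.5)

# Product of experts ("and"): multiply densities, then renormalize numerically.
product = expert1 * expert2
product /= product.sum() * dx

# Mixture model ("or"): a weighted sum of densities (already normalized).
mixture = 0.5 * expert1 + 0.5 * expert2

def std_of(p):
    mean = (x * p).sum() * dx
    return np.sqrt(((x - mean) ** 2 * p).sum() * dx)

print(f"expert std devs : {std_of(expert1):.2f}, {std_of(expert2):.2f}")
print(f"product std dev : {std_of(product):.2f}  (sharper than either expert)")
print(f"mixture std dev : {std_of(mixture):.2f}  (broader)")
```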

  19. Outline • Basic background on statistical learning and Graphical model • Contrastive divergence and Restricted Boltzmann machine • Product of experts • Contrastive divergence • Restricted Boltzmann Machine • Deep belief net

  20. Contrastive Divergence (CD). The probability of a configuration is its exponentiated energy divided by the partition function. Maximum likelihood learning follows the gradient of the log-likelihood, which splits into two terms: an expectation under the data distribution minus an expectation under the model distribution.

  21. Contrastive Divergence (CD). Gradient of the likelihood: the data-distribution term is easy to compute (tractable), but the model-distribution term is intractable because it involves the partition function. It can be approximated by Gibbs sampling from p(z1, z2, …, zM), which gives an accurate but slow gradient. Contrastive divergence instead starts the sampling chain at the data and runs it for only T = 1 step, giving an approximate but fast gradient; descending this approximate gradient drives the parameters toward the CD minimum, which is usually close to the maximum-likelihood solution.

  22. Gibbs sampling for graphical models. (Figure: a model with hidden nodes h1 through h5 and visible nodes x1, x2, x3; each variable is resampled in turn from its conditional distribution given the current values of all the others.) More information on Gibbs sampling: Pattern Recognition and Machine Learning (PRML).
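A minimal Gibbs sampling sketch, reusing the made-up three-variable model from the partition-function example above: each binary variable is resampled in turn from its conditional given the current values of the other two, computed by evaluating the unnormalized joint at both settings of that variable.

```python
import numpy as np

rng = np.random.default_rng(0)
w1, w2 = 1.5, -0.5  # assumed edge weights, as in the earlier sketch

def unnorm(a, b, c):
    """Unnormalized joint exp(w1*a*c + w2*b*c)."""
    return np.exp(w1 * a * c + w2 * b * c)

def gibbs_sweep(state):
    """Resample each variable from its conditional given the rest."""
    for i in range(3):
        s0, s1 = list(state), list(state)
        s0[i], s1[i] = 0, 1
        p1 = unnorm(*s1) / (unnorm(*s0) + unnorm(*s1))
        state[i] = int(rng.random() < p1)
    return state

state, samples = [0, 0, 0], []
for t in range(6000):
    state = gibbs_sweep(state)
    if t >= 1000:                       # discard burn-in sweeps
        samples.append(tuple(state))

# Empirical marginal P(A=1) estimated from the Gibbs chain.
print("estimated P(A=1):", round(np.mean([s[0] for s in samples]), 3))
```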

  23. Convergence of contrastive divergence (CD). The fixed points of ML are not fixed points of CD and vice versa, so CD is a biased learning algorithm, but the bias is typically very small. CD can be used to get close to the ML solution, and ML learning can then be used for fine-tuning. It is not clear whether CD learning converges to a stable fixed point; as of 2005, a proof was not available (further theoretical results? please inform us). M. A. Carreira-Perpiñán and G. E. Hinton, On Contrastive Divergence Learning, Artificial Intelligence and Statistics, 2005.

  24. Outline • Basic background on statistical learning and Graphical model • Contrastive divergence and Restricted Boltzmann machine • Product of experts • Contrastive divergence • Restricted Boltzmann Machine • Deep belief net

  25. Boltzmann Machine. An undirected graphical model with hidden nodes. Boltzmann machine energy: E(x,h) = b'x + c'h + h'Wx + x'Ux + h'Vh

  26. Restricted Boltzmann Machine (RBM). An undirected, loopy, layered model: hidden units h = (h1, …, h5) and visible units x = (x1, x2, x3) are connected only across layers by the weight matrix W. RBM energy: E(x,h) = b'x + c'h + h'Wx (the full Boltzmann machine energy E(x,h) = b'x + c'h + h'Wx + x'Ux + h'Vh adds within-layer connections, which the RBM removes). The joint probability is the exponentiated energy normalized by the partition function, and the conditionals factorize: P(xj = 1|h) = σ(bj + W'•j · h) and P(hi = 1|x) = σ(ci + Wi• · x). Read the manuscript for details.
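A small sketch of the two conditionals with made-up sizes and parameters (W, b, c below are placeholders, and all units are assumed binary):

```python
import numpy as np

rng = np.random.default_rng(1)
n_visible, n_hidden = 3, 5
W = rng.normal(scale=0.1, size=(n_hidden, n_visible))  # W[i, j] links h_i and x_j
b = np.zeros(n_visible)   # visible biases b_j
c = np.zeros(n_hidden)    # hidden biases c_i

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def p_h_given_x(x):
    """P(h_i = 1 | x) = sigmoid(c_i + W_i. x), for all i at once."""
    return sigmoid(c + W @ x)

def p_x_given_h(h):
    """P(x_j = 1 | h) = sigmoid(b_j + W'_.j h), for all j at once."""
    return sigmoid(b + W.T @ h)

x = np.array([1.0, 0.0, 1.0])
h_prob = p_h_given_x(x)
h_sample = (rng.random(n_hidden) < h_prob).astype(float)   # sample hidden units
print("P(h=1|x):", np.round(h_prob, 3))
print("P(x=1|h):", np.round(p_x_given_h(h_sample), 3))
```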

  27. Restricted Boltzmann Machine (RBM). E(x,h) = b'x + c'h + h'Wx, with x = [x1 x2 …]' and h = [h1 h2 …]'. Parameter learning: maximize the log-likelihood of the training data. Geoffrey E. Hinton, "Training Products of Experts by Minimizing Contrastive Divergence," Neural Computation 14, 1771–1800 (2002)

  28. CD for RBM. CD for the RBM is very fast: because the conditionals P(xj = 1|h) = σ(bj + W'•j · h) and P(hi = 1|x) = σ(ci + Wi• · x) factorize over units, all hidden units can be sampled in parallel given x, and all visible units in parallel given h, so one alternating Gibbs step is cheap.

  29. CD for RBM. The alternating chain: start from a data vector x, sample the hidden units with P(hi = 1|x) = σ(ci + Wi• · x), reconstruct the visible units with P(xj = 1|h) = σ(bj + W'•j · h), and compute the hidden probabilities once more; CD-1 forms its gradient estimate from the statistics of the data step and the reconstruction step.
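A minimal CD-1 training sketch for a binary RBM, putting the pieces above together; the data, layer sizes, learning rate, and epoch count are made up for illustration, and the update follows the standard form ΔW ∝ ⟨h x'⟩_data − ⟨h x'⟩_reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 4, 0.1            # illustrative sizes and learning rate
W = rng.normal(scale=0.01, size=(n_hidden, n_visible))
b, c = np.zeros(n_visible), np.zeros(n_hidden) # visible and hidden biases

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy binary training data (two repeated patterns), purely for illustration.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 50, dtype=float)

for epoch in range(200):
    for x0 in data:
        # Positive phase: hidden probabilities and a binary sample at the data.
        ph0 = sigmoid(c + W @ x0)
        h0 = (rng.random(n_hidden) < ph0).astype(float)

        # Negative phase (one Gibbs step): reconstruct x, recompute hidden probs.
        x1 = (rng.random(n_visible) < sigmoid(b + W.T @ h0)).astype(float)
        ph1 = sigmoid(c + W @ x1)

        # CD-1 updates: data statistics minus reconstruction statistics.
        W += lr * (np.outer(ph0, x0) - np.outer(ph1, x1))
        b += lr * (x0 - x1)
        c += lr * (ph0 - ph1)

# A trained RBM should reconstruct the training patterns fairly well.
x = data[0]
recon = sigmoid(b + W.T @ sigmoid(c + W @ x))
print("input:         ", x)
print("reconstruction:", np.round(recon, 2))
```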

  30. RBM for classification y: classification label Hugo Larochelle and Yoshua Bengio, Classification using Discriminative Restricted Boltzmann Machines, ICML 2008.

  31. RBM itself has many applications: multiclass classification, collaborative filtering, motion capture modeling, information retrieval, modeling natural images, segmentation. Y. Li, D. Tarlow, R. Zemel, Exploring Compositional High Order Pattern Potentials for Structured Output Learning, CVPR 2013. V. Mnih, H. Larochelle, G. E. Hinton, Conditional Restricted Boltzmann Machines for Structured Output Prediction, Uncertainty in Artificial Intelligence, 2011. H. Larochelle and Y. Bengio, Classification Using Discriminative Restricted Boltzmann Machines, ICML 2008. R. Salakhutdinov, A. Mnih, G. E. Hinton, Restricted Boltzmann Machines for Collaborative Filtering, ICML 2007. R. Salakhutdinov and G. E. Hinton, Replicated Softmax: An Undirected Topic Model, NIPS 2009. S. Osindero and G. E. Hinton, Modeling Image Patches with a Directed Hierarchy of Markov Random Fields, NIPS 2008.

  32. Outline • Basic background on statistical learning and Graphical model • Contrastive divergence and Restricted Boltzmann machine • Deep belief net (DBN) • Why deep learning? • Learning and inference • Applications

  33. Belief Nets. A belief net is a directed acyclic graph composed of random variables. (Figure: random hidden causes at the top generate visible effects at the bottom.)

  34. Deep Belief Net. (Figure: a stack of layers x, h1, h2, h3; pixels => edges => local shapes => object parts.) • A belief net that is deep • A generative model: P(x, h1, h2, h3) = p(x|h1) p(h1|h2) p(h2, h3), and in general P(x, h1, …, hl) = p(x|h1) p(h1|h2) … p(hl-2|hl-1) p(hl-1, hl) • Used for unsupervised training of multi-layer deep models.
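A sketch of how one would generate a sample from this model, assuming all units are binary and the parameters already exist (the weights and biases below are random placeholders): run Gibbs sampling in the top-level RBM p(h2, h3), then sample downward through the directed layers p(h1|h2) and p(x|h1).

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
sample = lambda p: (rng.random(p.shape) < p).astype(float)

# Placeholder layer sizes and parameters for a 3-hidden-layer DBN.
sizes = {'x': 8, 'h1': 6, 'h2': 4, 'h3': 3}
W1 = rng.normal(scale=0.1, size=(sizes['h1'], sizes['x']))   # generative weights h1 -> x
W2 = rng.normal(scale=0.1, size=(sizes['h2'], sizes['h1']))  # generative weights h2 -> h1
W3 = rng.normal(scale=0.1, size=(sizes['h3'], sizes['h2']))  # top-level RBM weights (h2, h3)
b_x, b_h1, b_h2, b_h3 = (np.zeros(sizes[k]) for k in ('x', 'h1', 'h2', 'h3'))

# 1) Gibbs sampling in the undirected top-level RBM p(h2, h3).
h2 = sample(np.full(sizes['h2'], 0.5))
for _ in range(100):
    h3 = sample(sigmoid(b_h3 + W3 @ h2))
    h2 = sample(sigmoid(b_h2 + W3.T @ h3))

# 2) Ancestral sampling down the directed layers: p(h1|h2), then p(x|h1).
h1 = sample(sigmoid(b_h1 + W2.T @ h2))
x = sample(sigmoid(b_x + W1.T @ h1))
print("generated visible sample:", x)
```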

  35. Why deep learning? Pixels => edges => local shapes => object parts. The mammal brain is organized in a deep architecture, with a given input percept represented at multiple levels of abstraction, each level corresponding to a different area of cortex. An architecture with insufficient depth can require many more computational elements, potentially exponentially more (with respect to input size), than an architecture whose depth is matched to the task. Since the number of computational elements one can afford depends on the number of training examples available to tune or select them, the consequences are not just computational but also statistical: poor generalization may be expected when using an insufficiently deep architecture to represent some functions. T. Serre et al., "A quantitative theory of immediate visual recognition," Progress in Brain Research, Computational Neuroscience: Theoretical Insights into Brain Function, vol. 165, pp. 33–56, 2007. Yoshua Bengio, "Learning Deep Architectures for AI," Foundations and Trends in Machine Learning, 2009.

  36. Why deep learning? Linear regression, logistic regression: depth 1. Kernel SVM: depth 2. Decision tree: depth 2. Boosting: depth 2. The basic conclusion these results suggest is that when a function can be compactly represented by a deep architecture, it may need a very large architecture to be represented by an insufficiently deep one (example: logic gates, multi-layer NNs with linear threshold units and positive weights). Yoshua Bengio, "Learning Deep Architectures for AI," Foundations and Trends in Machine Learning, 2009.

  37. Example: sum-product network (SPN). The shallow (two-layer) representation of the function needs on the order of 2^(N-1) terms and N·2^(N-1) parameters, whereas the deep sum-product network over the variables X1, …, X5 shown in the figure needs only O(N) parameters.

  38. Depth of existing approaches • Boosting (2 layers): layer 1 is the base learner; layer 2 is the vote or linear combination of layer 1. • Decision tree, LLE, KNN, kernel SVM (2 layers): layer 1 computes the matching degree to a set of local templates; layer 2 combines these degrees. • Brain: 5-10 layers.

  39. Why does a decision tree have depth 2? It relies on a partition of the input space and is a local estimator: it uses separate parameters for each region, and each region is associated with a leaf. It needs as many training samples as there are variations of interest in the target function, so it is not good for highly varying functions; the number of training samples must grow exponentially with the number of dimensions to achieve a fixed error rate.

  40. Outline • Basic background on statistical learning and Graphical model • Contrastive divergence and Restricted Boltzmann machine • Deep belief net (DBN) • Why DBN? • Learning and inference • Applications

  41. Deep Belief Net. P(x, h1, h2, h3) = p(x|h1) p(h1|h2) p(h2, h3). Inference problem: infer the states of the unobserved variables. Learning problem: adjust the interactions between variables to make the network more likely to generate the observed data.

  42. Deep Belief Net. Inference problem (the problem of explaining away): in a directed model where C is the parent of A and B, P(A,B|C) = P(A|C)P(B|C); but in the DBN the hidden units h11 and h12 are parents of the visible unit x1, and conditioning on the child does not factorize: P(h11, h12|x1) ≠ P(h11|x1) P(h12|x1). An example is given in the manuscript. Solution: complementary prior.

  43. Deep Belief Net. Inference problem (the problem of explaining away); solution: complementary prior. (Figure: a deep belief net with visible layer x and hidden layers h1 (2000 units), h2 (1000 units), h3 (500 units), h4 (30 units).)

  44. Deep Belief Net. P(hi = 1|x) = σ(ci + Wi• · x). • Explaining-away problem of inference (see the manuscript); solution: complementary prior, see the manuscript. • Learning problem: greedy layer-by-layer RBM training (optimizing a lower bound) followed by fine-tuning, with contrastive divergence used for the RBM training.

  45. Code reading. The DeepLearningToolbox is much easier to read for understanding the DBN.

  46. Deep Belief Net. Why does greedy layer-wise learning work? It optimizes a variational lower bound on the log-likelihood, log P(x) ≥ Σh1 Q(h1|x)[log P(h1) + log P(x|h1)] − Σh1 Q(h1|x) log Q(h1|x) (1). When we fix the parameters for layer 1 and optimize the parameters for layer 2, we are optimizing the P(h1) term in (1).
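A sketch of greedy layer-wise pre-training with made-up data and layer sizes: a small CD-1 trainer (a compact version of the earlier sketch, wrapped here as a hypothetical train_rbm helper) is applied to the data, then each RBM's hidden probabilities become the training data for the next RBM.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_rbm(data, n_hidden, lr=0.1, epochs=50):
    """Train one binary RBM with CD-1; data has shape (n_samples, n_visible)."""
    n_visible = data.shape[1]
    W = rng.normal(scale=0.01, size=(n_hidden, n_visible))
    b, c = np.zeros(n_visible), np.zeros(n_hidden)
    for _ in range(epochs):
        for x0 in data:
            ph0 = sigmoid(c + W @ x0)
            h0 = (rng.random(n_hidden) < ph0).astype(float)
            x1 = (rng.random(n_visible) < sigmoid(b + W.T @ h0)).astype(float)
            ph1 = sigmoid(c + W @ x1)
            W += lr * (np.outer(ph0, x0) - np.outer(ph1, x1))
            b += lr * (x0 - x1)
            c += lr * (ph0 - ph1)
    return W, b, c

# Toy binary data, purely for illustration.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 20, dtype=float)

# Greedy stacking: train a 6 -> 4 RBM, push the data through it,
# then train a 4 -> 3 RBM on the resulting hidden probabilities.
dbn, layer_input = [], data
for n_hidden in (4, 3):
    W, b, c = train_rbm(layer_input, n_hidden)
    dbn.append((W, b, c))
    layer_input = sigmoid(c + layer_input @ W.T)   # propagate the data upward

print("stacked", len(dbn), "RBMs; top representation shape:", layer_input.shape)
```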

  47. Deep Belief Net and RBM. An RBM can be considered as a DBN with infinitely many layers (the figure unrolls the RBM into an infinite alternating stack of layers x0, h0, x1, h1, x2, … that share the same weights).

  48. Pretrain, fine-tune and inference (1): after layer-wise pretraining, the stack can be unrolled into an autoencoder and fine-tuned with back-propagation (BP).

  49. Pretrain, fine-tune and inference (2). y: identity or rotation degree. The layers are first pretrained without labels, then the whole network is fine-tuned with the supervised target y.
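A sketch of the pretrain-then-fine-tune idea with everything made up (data, labels, layer sizes): weights that would come from greedy RBM pretraining (random placeholders here) initialize a feed-forward network, which is then fine-tuned by back-propagation on a supervised loss for the label y.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Pretend these (W, c) pairs came from greedy RBM pretraining; here they are
# random placeholders that initialize two hidden layers of sizes 4 and 3.
layers = [(rng.normal(scale=0.1, size=(4, 6)), np.zeros(4)),
          (rng.normal(scale=0.1, size=(3, 4)), np.zeros(3))]

# Toy supervised data: 6-dimensional binary inputs with a binary label y.
X = np.array([[1, 1, 1, 0, 0, 0],
              [0, 0, 0, 1, 1, 1]] * 20, dtype=float)
y = np.array([0, 1] * 20, dtype=float)

Wout, bout, lr = rng.normal(scale=0.1, size=(1, 3)), np.zeros(1), 0.5

for epoch in range(300):
    for x, t in zip(X, y):
        # Forward pass through the (pre-initialized) hidden layers.
        acts = [x]
        for W, c in layers:
            acts.append(sigmoid(c + W @ acts[-1]))
        p = sigmoid(bout + Wout @ acts[-1])[0]       # predicted P(y=1 | x)

        # Back-propagation with a cross-entropy loss and sigmoid units.
        delta = np.array([p - t])
        grad = Wout.T @ delta                        # gradient entering the top hidden layer
        Wout -= lr * np.outer(delta, acts[-1])
        bout -= lr * delta
        for i in reversed(range(len(layers))):
            W, c = layers[i]
            delta = grad * acts[i + 1] * (1 - acts[i + 1])
            grad = W.T @ delta
            layers[i] = (W - lr * np.outer(delta, acts[i]), c - lr * delta)

# Inference: a forward pass through the fine-tuned network.
correct = 0
for x, t in zip(X, y):
    a = x
    for W, c in layers:
        a = sigmoid(c + W @ a)
    correct += int((sigmoid(bout + Wout @ a)[0] > 0.5) == bool(t))
print("training accuracy after fine-tuning:", correct / len(X))
```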
