

  1. MLSLP-2012: Learning Deep Architectures Using Kernel Modules (thanks to collaborations/discussions with many people). Li Deng, Microsoft Research, Redmond

  2. Outline
  • Deep neural net ("modern" multilayer perceptron): drawback, hard to parallelize in learning
  • Deep Convex Net (Deep Stacking Net): drawbacks, limited hidden-layer size and part of the parameters not learned by convex optimization
  • (Tensor DSN/DCN) and Kernel DCN
  • K-DCN: combines the elegance of kernel methods and the high performance of deep learning
  • Linearity of pattern functions (kernel) and nonlinearity in deep nets

  3. Deep Neural Networks

  4. Deep Stacking Network (DSN)
  • "Stacked generalization" in machine learning: use a high-level model to combine low-level models, aiming for greater predictive accuracy
  • This principle has been reduced to practice:
  • Learning parameters in DSN/DCN (Deng & Yu, Interspeech 2011; Deng, Yu & Platt, ICASSP 2012)
  • Parallelizable, scalable learning (Deng, Hutchinson & Yu, Interspeech 2012)

  5. DCN Architecture
  • Example: L = 3 modules, each with 784 input units, 3000 hidden units, and 10 output units (layer sizes from the figure)
  • Many modules, still easily trainable
  • Alternating linear and nonlinear sub-layers
  • Actual architecture for digit image recognition (10 classes); stacking pattern sketched below
  • MNIST: 0.83% error rate (see LeCun's MNIST site)
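
A minimal Python/NumPy sketch of the stacking pattern this architecture uses: each module sees the raw input vector concatenated with the outputs of all lower modules. Representing a module as a plain callable and the function name `dcn_forward` are illustrative assumptions, not from the talk; how a single module is trained is sketched after the next slide.

```python
import numpy as np

def dcn_forward(x, modules):
    """Run a stack of DCN modules on one input vector.

    Each module sees the raw input plus the predictions of all lower modules,
    e.g. 784 -> 784+10 -> 784+10+10 input dimensions for L = 3 modules.
    """
    z = x                              # e.g. a 784-dim pixel vector
    y = None
    for module in modules:             # each module maps its input to 10 class scores
        y = module(z)
        z = np.concatenate([z, y])     # augment the input for the next module
    return y                           # prediction of the top module
```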

  6. Anatomy of a Module in DCN
  • Figure: one module with 784 linear input units x, a 3000-unit hidden layer h whose weights come from random initialization (Wrand) or an RBM (WRBM), and 10 linear output units fit to the targets
  • Upper-layer weights solved in closed form: U = pinv(h) t
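
A hedged NumPy sketch of one such module as drawn on the slide: a sigmoid hidden layer with weights W (random here, standing in for the Wrand/WRBM initializations in the talk) and output weights solved as U = pinv(h) t. The initialization scale and helper names are assumptions.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def fit_module(X, T, n_hidden=3000, seed=0):
    """X: (n_samples, 784) inputs; T: (n_samples, 10) target codes."""
    rng = np.random.default_rng(seed)
    # lower weights: random here (Wrand); the talk also uses RBM-trained weights (WRBM)
    W = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    H = sigmoid(X @ W)                     # hidden-layer activations h
    U = np.linalg.pinv(H) @ T              # closed-form upper weights, U = pinv(h) t
    return W, U

def module_predict(z, W, U):
    return sigmoid(z @ W) @ U              # 10 linear output units
```

With this, the stack in the previous sketch can be built by wrapping each trained (W, U) pair as `module = lambda z: module_predict(z, W, U)`.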

  7. From DCN to Kernel-DCN

  8. Kernel-DCN
  • Replace each module by kernel ridge regression
  • Prediction y(x) = k(x)^T a, where a = (K + lambda I)^(-1) T and K = G^T G is the symmetric kernel matrix in the kernel-feature space G, typically of infinite dimension (e.g., Gaussian kernel)
  • Kernel trick: no need to compute G directly
  • Problem: the inverse is expensive when the training set is large
  • Solutions:
  • Nystrom-Woodbury approximation
  • Reduced-rank kernel regression (feature vector selection)
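
A minimal sketch of one kernel ridge regression module with a Gaussian kernel, so the feature space G never has to be formed explicitly. The hyper-parameters `lam` (regularization) and `sigma` (kernel width) correspond to the two per-module hyper-parameters mentioned later in the talk; the helper names are assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """K[i, j] = exp(-||a_i - b_j||^2 / (2 sigma^2))."""
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-d2 / (2.0 * sigma**2))

def fit_kernel_module(X, T, lam=1e-2, sigma=1.0):
    K = gaussian_kernel(X, X, sigma)                      # T x T symmetric kernel matrix
    return np.linalg.solve(K + lam * np.eye(len(X)), T)   # dual coefficients a = (K + lam I)^-1 T

def predict_kernel_module(X_new, X_train, a, sigma=1.0):
    return gaussian_kernel(X_new, X_train, sigma) @ a     # y(x) = sum_i a_i k(x, x_i)
```

The direct solve above is the expensive step for a large training set, which is what motivates the approximations on the following slides.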

  9. Nystrom-Woodbury Approximation
  • Kernel ridge regression: a = (K + lambda I)^(-1) T, where K is the T x T kernel matrix
  • K is large when the number of training samples T is large
  • Nystrom-Woodbury approximation:
  • Sample m columns of K to form C (T x m) and the intersection block W (m x m)
  • Approximation: K ~ C W^+ C^T
  • SVD of W: O(m^3)
  • Computation of the regularized inverse via the Woodbury identity: only an m x m system is factorized
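
A hedged sketch of the Nystrom plus Woodbury idea, reusing `gaussian_kernel` from the previous sketch: sample m of the T training points, form C and W, and apply the Woodbury identity so only an m x m matrix is ever factorized. The uniform sampling and the assumption that W is invertible are simplifications.

```python
import numpy as np

def nystrom_krr(X, T_targets, lam, sigma, m, seed=0):
    """Kernel ridge regression with a Nystrom-approximated kernel matrix."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)    # sample m columns of K
    C = gaussian_kernel(X, X[idx], sigma)              # T x m
    W = C[idx, :]                                      # m x m intersection block
    # Woodbury: (lam I + C W^-1 C^T)^-1 = (I - C (lam W + C^T C)^-1 C^T) / lam
    inner = lam * W + C.T @ C                          # only an m x m matrix to factorize
    a = (T_targets - C @ np.linalg.solve(inner, C.T @ T_targets)) / lam
    return a                                           # dual coefficients; predict as in the previous sketch
```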

  10. Nystrom-Woodbury Approximation
  • Figure: eigenvalue spectrum of W when 5000 samples are selected (with an enlarged view of the tail)

  11. Nystrom-Woodbury Approximation
  • Figures: kernel approximation error with 5000 samples, measured in the Frobenius norm and the 2-norm

  12. K-DSN Using Reduced-Rank Kernel Regression
  • Solution: feature vector selection
  • Identify a subset of the training vectors whose images form an approximate basis in the feature space
  • The kernel function can then be written as a sparse kernel expansion involving only terms corresponding to that subset of the training data
  • Feature vector selection algorithm from "Feature vector selection and projection using kernels" (G. Baudat and F. Anouar); a simplified variant is sketched below
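
A simplified, hedged sketch of the selection idea: greedily pick training vectors whose feature-space images are worst represented by the span of the already-selected ones, so the kernel expansion can be restricted to that subset. This is a loose greedy variant written for illustration, not the exact Baudat and Anouar procedure.

```python
import numpy as np

def select_feature_vectors(K, n_select):
    """K: full kernel matrix; returns indices of an approximate basis in feature space."""
    selected = [int(np.argmax(np.diag(K)))]
    for _ in range(n_select - 1):
        Kss = K[np.ix_(selected, selected)]          # kernels among selected vectors
        Ksa = K[selected, :]                         # kernels between selected and all vectors
        # squared norm of projecting each phi(x_i) onto span{phi(x_j) : j selected}
        proj = np.sum(Ksa * np.linalg.solve(Kss, Ksa), axis=0)
        residual = np.diag(K) - proj                 # how badly each point is represented
        residual[selected] = -np.inf                 # never re-select a chosen vector
        selected.append(int(np.argmax(residual)))
    return selected
```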

  13. Computation Analysis (run time)
  • D-dimensional input, M hidden units, training data size T, k outputs
  • Run time is reduced if there are sparse support vectors or tree-VQ clusters

  14. K-DCN: Layer-Wise Regularization
  • Two hyper-parameters in each module
  • Tune them using cross-validation data
  • Relaxation at lower modules
  • Different regularization procedures for lower modules vs. higher modules
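
A minimal sketch of the per-module tuning step, assuming the two hyper-parameters are the Gaussian kernel width sigma and the regularization weight lambda, and reusing `fit_kernel_module` / `predict_kernel_module` from the earlier sketch; the grids and the error metric are illustrative assumptions.

```python
import numpy as np
from itertools import product

def tune_module(X_tr, T_tr, X_cv, T_cv, sigmas=(0.5, 1.0, 2.0), lams=(1e-3, 1e-2, 1e-1)):
    """Grid-search the two per-module hyper-parameters on held-out CV data."""
    best = None
    for sigma, lam in product(sigmas, lams):
        a = fit_kernel_module(X_tr, T_tr, lam=lam, sigma=sigma)
        Y = predict_kernel_module(X_cv, X_tr, a, sigma=sigma)
        err = np.mean(np.argmax(Y, 1) != np.argmax(T_cv, 1))   # classification error on CV data
        if best is None or err < best[0]:
            best = (err, sigma, lam)
    return best                                                # (cv error, sigma, lambda)
```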

  15. SLT-2012 paper:
