
Generalization Error of Linear Neural Networks in an Empirical Bayes Approach






Presentation Transcript


  1. Generalization Error of Linear Neural Networks in an Empirical Bayes Approach. Shinichi Nakajima, Sumio Watanabe. Tokyo Institute of Technology / Nikon Corporation.

  2. Contents • Backgrounds • Regular models • Unidentifiable models • Superiority of Bayes to ML • What’s the purpose? • Setting • Model • Subspace Bayes (SB) Approach • Analysis • (James-Stein estimator) • Solution • Generalization error • Discussion & Conclusions

  3. Regular models. Regular models are those in which det(Fisher information) > 0 everywhere, e.g., mean estimation and linear regression; the likelihood is (asymptotically) normal for ANY true parameter. Conventional learning theory (K: dimensionality of the parameter space, n: # of samples, x: input, y: output): 1. Asymptotic normality of the distribution of the ML estimator and of the Bayes posterior holds, giving GE ≅ K/(2n) and FE ≅ (K/2) log n, which underlies the model selection methods (AIC, BIC, MDL). 2. Asymptotic generalization error: λ(ML) = λ(Bayes) = K/2.

  4. Unidentifiable models. Unidentifiable models have singularities, where det(Fisher information) = 0, e.g., neural networks, Bayesian networks, mixture models, and hidden Markov models (H: # of components). When the true parameter lies on the singularities (the unidentifiable set), the likelihood is NOT normal. Consequence: 1. The asymptotic normalities do NOT hold, so there is no (penalized-likelihood-type) information criterion.

  5. Superiority of Bayes to ML. Unidentifiable models (neural networks, Bayesian networks, mixture models, hidden Markov models) have singularities where det(Fisher information) = 0. How do the singularities work in learning? When the true parameter is on the singularities: in ML, the enlarged neighborhood of the true parameter accelerates overfitting; in Bayes, the enlarged population of parameters denoting the true distribution suppresses overfitting (only in Bayes). Consequences: 1. The asymptotic normalities do NOT hold, so there is no (penalized-likelihood-type) information criterion. 2. Bayes has an advantage: G(Bayes) < G(ML).

  6. What’s the purpose? • Bayes provides good generalization. • But it is expensive (needs Markov chain Monte Carlo). Is there any approximation with both good generalization and tractability? • Variational Bayes (VB) [Hinton&vanCamp93; MacKay95; Attias99; Ghahramani&Beal00]: analyzed in another paper [Nakajima&Watanabe05]. • Subspace Bayes (SB): analyzed here.

  7. Contents • Backgrounds • Regular models • Unidentifiable models • Superiority of Bayes to ML • What’s the purpose? • Setting • Model • Subspace Bayes (SB) Approach • Analysis • (James-Stein estimator) • Solution • Generalization error • Discussion & Conclusions

  8. Linear Neural Networks (LNNs). An LNN with M input, N output, and H hidden units computes y = BAx, where A is the input parameter (H x M) matrix and B is the output parameter (N x H) matrix. Essential parameter dimensionality: H(M + N - H), since only the product BA matters. Trivial redundancy: the true map is B*A* with rank H* (H* ≤ H), so the learner's rank H may exceed the true rank H*.
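
To make the dimensions on this slide concrete, here is a minimal NumPy sketch (sizes chosen arbitrarily for illustration, not taken from the paper) of the LNN map y = BAx and the essential parameter count H(M + N - H):

```python
import numpy as np

# Example sizes (arbitrary, for illustration only).
M, N, H = 50, 30, 10
rng = np.random.default_rng(0)

A = rng.standard_normal((H, M))   # input parameter matrix  (H x M)
B = rng.standard_normal((N, H))   # output parameter matrix (N x H)

x = rng.standard_normal(M)
y = B @ (A @ x)                   # LNN output: y = BAx, rank of BA is at most H

# Essential parameter dimensionality of a rank-H map: H(M + N - H),
# even though A and B together contain H(M + N) entries.
essential_dim = H * (M + N - H)
print(y.shape, essential_dim)     # (30,) 700
```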

  9. Maximum likelihood estimator [Baldi&Hornik95]. The ML estimator of BA is the rank-H reduced-rank regression solution, BA_hat = sum_{h=1..H} γ_h ω_{b,h} ω_{a,h}^T Q^{-1/2}, where Q is the input sample covariance, R is the output-input sample cross-covariance, γ_h is the h-th largest singular value of R Q^{-1/2}, ω_{a,h} is the corresponding right singular vector, and ω_{b,h} is the corresponding left singular vector.
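
The formula above is the classical reduced-rank regression construction from the SVD of R Q^{-1/2}. The sketch below is my own hedged NumPy restatement of that standard construction (not code from the paper), assuming i.i.d. samples, an invertible input covariance Q = (1/n) Σ x x^T, and cross-covariance R = (1/n) Σ y x^T:

```python
import numpy as np

def lnn_ml_estimator(X, Y, H):
    """Rank-H ML estimate of BA for y = BAx + noise (reduced-rank regression sketch).

    X: (n, M) inputs, Y: (n, N) outputs, H: number of hidden units.
    Returns an N x M estimate of the product BA.
    """
    n = X.shape[0]
    Q = X.T @ X / n                       # input sample covariance      (M x M)
    R = Y.T @ X / n                       # output-input cross-covariance (N x M)

    # Symmetric inverse square root of Q (assumed invertible here).
    evals, evecs = np.linalg.eigh(Q)
    Q_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T

    # SVD of R Q^{-1/2}: gamma[h] is the (h+1)-th largest singular value,
    # U[:, h] the left and Vt[h] the right singular vector from the slide.
    U, gamma, Vt = np.linalg.svd(R @ Q_inv_sqrt)

    # Keep the H largest singular components, then map back through Q^{-1/2}.
    W_H = sum(gamma[h] * np.outer(U[:, h], Vt[h]) for h in range(H))
    return W_H @ Q_inv_sqrt
```

Truncating the SVD at rank H is what makes only the H largest singular values γ_h survive in the ML solution.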

  10. Bayes estimation. Notation: x: input, y: output, w: parameter; q(y|x): true distribution, p(y|x, w): learner, φ(w): prior; X^n = {(x_i, y_i)}: n training samples. Marginal likelihood: Z(X^n) = ∫ Π_i p(y_i|x_i, w) φ(w) dw. Posterior: p(w|X^n) = Π_i p(y_i|x_i, w) φ(w) / Z(X^n). Predictive: p(y|x, X^n) = ∫ p(y|x, w) p(w|X^n) dw. In ML (or MAP) we predict with one model; in Bayes we predict with an ensemble of models.

  11. Empirical Bayes (EB) approach [Efron&Morris73]. The prior now contains a hyperparameter, and the marginal likelihood, posterior, and predictive are defined as on the previous slide with the prior replaced by its hyperparameterized version. The hyperparameter is estimated by maximizing the marginal likelihood over the n training samples.

  12. Subspace Bayes (SB) approach. SB is an EB approach in which part of the parameters of the learner are regarded as hyperparameters, so that the posterior is taken only over a subspace of the parameters. a) MIP (Marginalizing in Input Parameter space) version: A is a parameter (marginalized over), B is a hyperparameter. b) MOP (Marginalizing in Output Parameter space) version: A is a hyperparameter, B is a parameter (marginalized over). The marginalization can be done analytically in LNNs.

  13. Intuitive explanation. (Figure: the Bayes posterior versus the SB posterior for a redundant component; in SB the hyperparameter is optimized.)

  14. Contents • Backgrounds • Regular models • Unidentifiable models • Superiority of Bayes to ML • What’s the purpose? • Setting • Model • Subspace Bayes (SB) Approach • Analysis • (James-Stein estimator) • Solution • Generalization error • Discussion & Conclusions

  15. Free energy (a.k.a. evidence, stochastic complexity). Free energy: F(X^n) = -log Z(X^n) = -log ∫ Π_i p(y_i|x_i, w) φ(w) dw, the negative log marginal likelihood, an important quantity used for model selection [Akaike80; Mackay92]. We minimize the free energy when optimizing the hyperparameter.

  16. Generalization error. Generalization error: G(n) = <K(q||p)>, where K(q||p) is the Kullback-Leibler divergence between the true distribution q and the predictive distribution p, and <V> denotes the expectation of V over training samples drawn from q. Asymptotic expansion: G(n) ≅ λ/n, where λ is the generalization coefficient. In regular models, 2λ = K; in unidentifiable models, λ depends on the model and the true distribution (and differs between ML and Bayes).

  17. James-Stein estimator [James&Stein61]. K-dimensional mean estimation (a regular model): from n samples, the ML estimator is the arithmetic mean. Domination of a over b: a's risk is no larger than b's for any true mean, and strictly smaller for a certain true mean. ML is efficient (never dominated by any unbiased estimator), but it is inadmissible (dominated by a biased estimator) when K ≥ 3 [Stein56]; the dominating estimator is the James-Stein (JS) estimator, which shrinks the sample mean toward the origin. (Figure: ML and JS estimates (K = 3) around the true mean.) A certain relation between EB and JS was discussed in [Efron&Morris73].
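
To see the domination claim numerically, here is a small Monte Carlo sketch using the textbook James-Stein form for a unit-variance Gaussian mean (this standard form is my assumption; the slide's own formula is not reproduced here):

```python
import numpy as np

def js_estimate(xbar, n):
    """James-Stein shrinkage of the sample mean (unit-variance Gaussian, K >= 3)."""
    K = xbar.size
    return (1.0 - (K - 2) / (n * np.sum(xbar ** 2))) * xbar   # xbar ~ N(mu, I/n)

rng = np.random.default_rng(0)
K, n, trials = 3, 10, 20000
mu = np.zeros(K)                       # the risk gap is largest near the origin

se_ml = se_js = 0.0
for _ in range(trials):
    xbar = rng.normal(mu, 1.0, size=(n, K)).mean(axis=0)
    se_ml += np.sum((xbar - mu) ** 2)
    se_js += np.sum((js_estimate(xbar, n) - mu) ** 2)

print("ML risk:", se_ml / trials)      # close to K/n = 0.3
print("JS risk:", se_js / trials)      # smaller: JS dominates ML when K >= 3
```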

  18. Positive-part JS type (PJS) estimator. The positive-part variant clips the shrinkage factor at zero, [z]_+ = max(0, z), so that components whose shrinkage factor would be negative are set exactly to zero. Thresholding therefore acts as model selection: PJS is a model-selecting, shrinkage estimator.
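
A minimal sketch of the positive-part clipping described on this slide, again using the textbook constant K - 2 purely for illustration (the constant in the paper's PJS-type estimator may differ):

```python
import numpy as np

def pjs_estimate(xbar, n):
    """Positive-part James-Stein type estimate of a K-dimensional mean (unit variance)."""
    K = xbar.size
    shrink = 1.0 - (K - 2) / (n * np.sum(xbar ** 2))
    return max(shrink, 0.0) * xbar          # [z]_+ = max(0, z): thresholding

# A small sample mean is cut exactly to zero (the component is deselected) ...
print(pjs_estimate(np.array([0.05, -0.03, 0.02]), n=10))   # -> zeros
# ... while a large one is merely shrunk.
print(pjs_estimate(np.array([2.0, -1.0, 0.5]), n=10))      # -> roughly 0.98 * input
```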

  19. Hyperparameter optimization. Orthonormality is assumed, i.e., the relevant covariance matrix is the d x d identity matrix. Under this assumption the optimization over the hyperparameter can be solved analytically in LNNs, and the optimum hyperparameter value is obtained in closed form.

  20. SB solution (Theorem 1, Lemma 1). L: dimensionality of the marginalized subspace (per component), i.e., L = M in MIP or L = N in MOP. Lemma 1: the posterior is localized, so that the model at the SB estimator can be substituted for the predictive distribution. Theorem 1: the SB estimator is given by a positive-part JS type shrinkage; hence SB is asymptotically equivalent to PJS estimation.
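
The exact expression of Theorem 1 is not reproduced on this slide. Purely to illustrate the structure the theorem names (a per-component, positive-part shrinkage of the ML singular values γ_h), here is a hypothetical sketch; the shrinkage factor 1 - L/(n γ_h²) below is an assumed placeholder with the right qualitative behaviour, not the paper's constant:

```python
import numpy as np

def pjs_type_shrinkage(gamma, L, n):
    """Illustrative PJS-type operation on the ML singular values gamma_h.

    Small gamma_h are thresholded to zero (component discarded), large gamma_h
    are kept almost unchanged.  The factor below is only a stand-in; the exact
    shrinkage factor of Theorem 1 is given in the paper.
    """
    gamma = np.asarray(gamma, dtype=float)
    factor = 1.0 - L / (n * gamma ** 2)       # assumed, illustrative factor
    return np.maximum(factor, 0.0) * gamma    # positive part: thresholding

print(pjs_type_shrinkage([2.0, 0.5, 0.05], L=50, n=1000))   # -> [1.975, 0.4, 0.0]
```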

  21. Generalization error (Theorem 2). Theorem 2: the SB generalization coefficient is given by an expectation over a Wishart distribution; it is expressed in terms of the h-th largest eigenvalues of a random matrix distributed as W_{N-H*}(M-H*, I_{N-H*}).
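
Theorem 2's coefficient is an expectation, over the Wishart distribution W_{N-H*}(M-H*, I_{N-H*}), of a function of the ordered eigenvalues; the integrand itself is not shown on this slide. The sketch below only demonstrates how such a Wishart expectation can be estimated by Monte Carlo (W = Z Z^T with Z a standard Gaussian (N-H*) x (M-H*) matrix); the eigenvalue function passed in is a placeholder:

```python
import numpy as np

def wishart_eig_expectation(N, M, H_star, func, trials=2000, seed=0):
    """Monte Carlo estimate of E[func(eigenvalues)] for W ~ W_{N-H*}(M-H*, I)."""
    q, dof = N - H_star, M - H_star          # matrix size q, degrees of freedom
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        Z = rng.standard_normal((q, dof))
        W = Z @ Z.T                          # Wishart-distributed q x q matrix
        eigs = np.linalg.eigvalsh(W)[::-1]   # eigenvalues, largest first
        total += func(eigs)
    return total / trials

# Placeholder integrand: the mean of the largest eigenvalue.
print(wishart_eig_expectation(N=30, M=50, H_star=5, func=lambda e: e[0]))
```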

  22. Large-scale approximation (Theorem 3). Theorem 3: in the large-scale limit, the generalization coefficient converges to a closed-form limit (the expression and the limit conditions are given in the paper).

  23. Results 1 (true rank dependence). (Figure: generalization coefficients of ML, Bayes, SB(MIP), and SB(MOP) versus the true rank H*, for N = 30 outputs and M = 50 inputs.) SB provides good generalization. Note: this does NOT mean domination of SB over Bayes; a discussion of domination needs consideration of a more delicate situation (see paper).

  24. Results 2 (redundant rank dependence). (Figure: generalization coefficients of ML, Bayes, SB(MIP), and SB(MOP) versus the number of hidden units H, for N = 30 outputs and M = 50 inputs.) The SB generalization coefficient depends on H similarly to that of ML, so SB also has a property similar to ML.

  25. Contents • Backgrounds • Regular models • Unidentifiable models • Superiority of Bayes to ML • What’s the purpose? • Setting • Model • Subspace Bayes (SB) Approach • Analysis • (James-Stein estimator) • Solution • Generalization error • Discussion & Conclusions

  26. Features of SB • SB provides good generalization; in LNNs it is asymptotically equivalent to PJS. • SB requires smaller computational costs: the marginalized space is reduced, and in some models the marginalization can be done analytically. • SB is related to the variational Bayes (VB) approach.

  27. Variational Bayes (VB) solution [Nakajima&Watanabe05] • VB results in the same solution as MIP. • VB automatically selects the larger dimension to marginalize. (Figure: the VB posterior is similar to the SB posterior, in contrast to the Bayes posterior.)

  28. Conclusions • We have introduced a subspace Bayes (SB) approach. • We have proved that, in LNNs, SB is asymptotically equivalent to a shrinkage (PJS) estimation. • Even asymptotically, the SB estimator for redundant components converges not to the ML estimator but to a smaller value, which means suppression of overfitting. • Interestingly, the MIP version of SB is asymptotically equivalent to VB. • We have clarified the SB generalization error. • SB has both Bayes-like and ML-like properties, i.e., shrinkage and acceleration of overfitting by basis selection, respectively.

  29. Future work • Analysis of other models (neural networks, Bayesian networks, mixture models, etc.). • Analysis of variational Bayes (VB) in other models.

  30. Thank you!
