
Forward & Backward selection in hybrid network




  1. Forward & Backward selection in hybrid network

  2. Introduction • A training algorithm for a hybrid neural network for regression. • The hybrid neural network's hidden layer contains RBF or projection units (perceptrons).

  3. When is it good?

  4. Hidden Units • RBF: • MLP:
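The unit formulas on this slide were images and did not survive extraction; the standard definitions (an assumption, consistent with the RBF/projection distinction used throughout the deck) are:

```latex
% RBF (local) unit: Gaussian response around a center c_j
\phi_j(\mathbf{x}) = \exp\!\left(-\frac{\lVert \mathbf{x}-\mathbf{c}_j\rVert^2}{2\sigma_j^2}\right)

% Projection (MLP) unit: sigmoid of a linear projection
\phi_j(\mathbf{x}) = \sigma\!\left(\mathbf{w}_j^{\top}\mathbf{x} + w_{0j}\right),
\qquad \sigma(t) = \frac{1}{1+e^{-t}}
```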

  5. Overall algorithm • Divide the input space and assign units to each sub-region. • Optimize the parameters. • Prune unnecessary weights using the Bayesian Information Criterion.

  6. Forward leg • Divide the input space into sub-regions. • Select the type of hidden unit for each sub-region. • Stop when the error goal or the maximum number of units is reached.
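The forward leg above can be sketched as a greedy loop. This is a simplified illustration, not the paper's code: it uses only RBF units, interprets slide 10's "center at maximum point" as centering each new unit at the input with the largest residual, and refits the output weights by least squares after every addition.

```python
import numpy as np

def rbf(X, c, sigma=0.5):
    """Gaussian RBF response of every row of X around center c."""
    return np.exp(-np.sum((X - c) ** 2, axis=1) / (2 * sigma ** 2))

def forward_select(X, y, max_units=15, error_goal=1e-4, sigma=0.5):
    """Greedy forward leg (sketch): add one unit per iteration, centered at
    the maximum-residual point, until the error goal or unit budget is hit."""
    centers = []
    Phi = np.ones((len(y), 1))                      # bias column
    w = np.linalg.lstsq(Phi, y, rcond=None)[0]
    for _ in range(max_units):
        resid = y - Phi @ w
        if np.mean(resid ** 2) < error_goal:
            break
        c = X[np.argmax(np.abs(resid))]             # center at maximum-residual point
        centers.append(c)
        Phi = np.column_stack([Phi, rbf(X, c, sigma)])
        w = np.linalg.lstsq(Phi, y, rcond=None)[0]  # refit output weights
    return centers, w, Phi

# toy 1-D regression problem
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])
centers, w, Phi = forward_select(X, y)
mse = np.mean((y - Phi @ w) ** 2)
```

The real algorithm additionally chooses between an RBF and a projection unit per sub-region (slides 8-9), which this sketch omits.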

  7. Input space division • Like CART using • Maximum reduction in
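The splitting criterion on this slide was an image; the CART-style idea it names can be illustrated as an exhaustive search over axis-aligned splits, keeping the one with the maximum reduction in sum-of-squared error (a sketch, with hypothetical function names):

```python
import numpy as np

def best_split(X, y):
    """CART-style division (sketch): over all input dimensions and candidate
    thresholds, return the split with the maximum SSE reduction."""
    n, d = X.shape
    base = np.sum((y - y.mean()) ** 2)              # SSE before splitting
    best = (None, None, 0.0)                        # (dim, threshold, reduction)
    for j in range(d):
        for t in np.unique(X[:, j])[:-1]:           # exclude max: both sides non-empty
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            sse = (np.sum((left - left.mean()) ** 2)
                   + np.sum((right - right.mean()) ** 2))
            if base - sse > best[2]:
                best = (j, t, base - sse)
    return best

# two well-separated clusters: the split should land between them
X = np.array([[0.0], [1.0], [2.0], [10.0], [11.0], [12.0]])
y = np.array([0.0, 0.0, 0.0, 5.0, 5.0, 5.0])
dim, thr, red = best_split(X, y)
```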

  8. Unit type selection (RBF)

  9. Unit type selection (projection)

  10. Units parameters • RBF unit: center at maximum point. • Projection unit: weight normalized of maximum point

  11. ML estimate for unit type

  12. Pruning • Target function values corrupted with Gaussian noise

  13. BIC approximation • Schwarz; Kass and Raftery
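The BIC formula itself was an image on this slide; for a regression model with Gaussian noise it takes the standard Schwarz form, which can be computed as follows (an illustration, not the paper's exact criterion):

```python
import numpy as np

def bic(y, y_hat, k):
    """Schwarz BIC for Gaussian-noise regression: n*ln(RSS/n) + k*ln(n).
    Lower is better; a weight is pruned if removing it lowers the BIC."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

# same fit quality, different parameter counts: BIC penalizes the larger model
y = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = y + 0.1
b_small = bic(y, y_hat, k=2)
b_large = bic(y, y_hat, k=3)
```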

  14. Evidence for the model

  15. Evidence for unit type

  16. Evidence for unit type (cont.)

  17. Evidence for unit type (cont.)

  18. Evidence for unit type: algorithm • Initialize alpha and beta • Loop: compute w, w0 • Recompute alpha and beta • Until the change in the evidence is small.
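The alpha/beta iteration on this slide matches MacKay-style evidence re-estimation for a linear-in-the-weights model. A minimal sketch under that assumption (alpha = prior precision, beta = noise precision, Phi = design matrix of hidden-unit outputs):

```python
import numpy as np

def evidence_update(Phi, y, alpha=1.0, beta=1.0, n_iter=50, tol=1e-6):
    """Evidence-framework loop (sketch): alternate the posterior mean of the
    weights with re-estimation of alpha and beta until the change is small."""
    n, m = Phi.shape
    PtP, Pty = Phi.T @ Phi, Phi.T @ y
    for _ in range(n_iter):
        A = alpha * np.eye(m) + beta * PtP            # posterior precision
        w = beta * np.linalg.solve(A, Pty)            # posterior mean of weights
        # gamma: effective number of well-determined parameters
        eigvals = np.linalg.eigvalsh(beta * PtP)
        gamma = np.sum(eigvals / (eigvals + alpha))
        alpha_new = gamma / (w @ w)                   # re-estimate prior precision
        beta_new = (n - gamma) / np.sum((y - Phi @ w) ** 2)  # noise precision
        converged = (abs(alpha_new - alpha) < tol and abs(beta_new - beta) < tol)
        alpha, beta = alpha_new, beta_new
        if converged:
            break
    return w, alpha, beta

# toy problem: noise std 0.1, so the recovered beta should be near 1/0.01 = 100
rng = np.random.default_rng(1)
Phi = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = Phi @ w_true + 0.1 * rng.normal(size=100)
w, alpha, beta = evidence_update(Phi, y)
```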

  19. Pumadyn data set (DELVE archive) • Dynamics of a Puma robot arm. • Target: angular acceleration of one of the links. • Inputs: various joint angles, velocities and torques. • Large Gaussian noise. • The data set is nonlinear. • Input dimension: 8, 32.

  20. Results pumadyn-32nh

  21. Results pumadyn-8nh

  22. Related work • Hassibi et al. with the Optimal Brain Surgeon. • MacKay with Bayesian inference of weights and regularization parameters. • HME of Jordan and Jacobs, with division of the input space. • Schwarz, Kass &amp; Raftery with BIC.

  23. Discussion • Pruning removes 90% of the parameters. • Pruning reduces the variance of the estimator. • The pruning algorithm is slow. • PRBFN is better than MLP or RBF alone. • Disadvantage of the Bayesian techniques: the choice of prior distribution parameters. • The Bayesian techniques are better than the LRT. • Unit type selection is a crucial element of PRBFN. • The curse of dimensionality is clearly visible on the pumadyn data sets.
