
Presentation Transcript


  1. THE HONG KONG UNIVERSITY OF SCIENCE & TECHNOLOGY CSIT 5220: Reasoning and Decision under Uncertainty. L07: Parameter Learning. Nevin L. Zhang, Room 3504, phone: 2358-7015, Email: lzhang@cs.ust.hk, Home page

  2. Overview of Course • We have done: • Concept of Bayesian networks • D-separation • Inference • Manual model building • Next: • L07 Parameter learning: estimate parameters from data • L08 Structure learning: determine both structure and parameters from data

  3. L07: Parameter Learning

  4. Outline • The MLE Principle • Parameter Learning from Complete Data • Missing Values • Parameter Learning from Incomplete Data • Bayesian Parameter Learning • Reading: Jensen & Nielsen, Chapter 6; Zhang & Guo, Chapter 7

  5.–8. [Slides 5–8: content not captured in the transcript]

  9. • So, m_h and m_t contain all the information that is necessary for computing the likelihood function; other details of the data do not matter. • Because of this, they are called sufficient statistics.
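The sufficient-statistics point can be illustrated with a short sketch (the function name and data encoding are mine, not from the slides): the MLE of the probability of heads depends on the data only through the counts m_h and m_t, and equals m_h / (m_h + m_t).

```python
# Illustrative sketch: MLE for a coin from head/tail counts.
# Only the sufficient statistics m_h and m_t enter the likelihood
# L(theta) = theta^m_h * (1 - theta)^m_t, maximized at m_h / (m_h + m_t).

def coin_mle(data):
    """data: sequence of 'H'/'T' outcomes; returns the MLE of P(heads)."""
    m_h = sum(1 for x in data if x == 'H')
    m_t = len(data) - m_h
    return m_h / (m_h + m_t)

# Two data sets with the same counts give the same estimate:
print(coin_mle("HHTHT"))  # 3 heads, 2 tails -> 0.6
print(coin_mle("TTHHH"))  # same sufficient statistics -> 0.6
```

The order of the tosses is ignored entirely; any two data sets with equal counts yield the same likelihood function, hence the same estimate.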

  10. [Slide 10: content not captured in the transcript]

  11. Outline • The MLE Principle • Parameter Learning from Complete Data • Missing Values • Parameter Learning from Incomplete Data • Bayesian Parameter Learning

  12. Single variable with multiple values

  13. MLE

  14. The General Case
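The general-case slides themselves are not reproduced in the transcript, but the standard result for complete data is that the MLE of each conditional probability P(X = k | pa(X) = j) is the relative frequency N(X = k, pa(X) = j) / N(pa(X) = j). A minimal sketch (variable names and data encoding are my own, not from the lecture):

```python
# Sketch of MLE for Bayesian network parameters from complete data:
# each CPT entry is estimated by a relative frequency of counts.
from collections import Counter

def mle_cpt(cases, child, parents):
    """Estimate P(child | parents) by relative frequencies.
    cases: list of dicts mapping variable name -> value (complete data)."""
    joint = Counter()   # counts N(parents = j, child = k)
    parent = Counter()  # counts N(parents = j)
    for c in cases:
        j = tuple(c[p] for p in parents)
        joint[(j, c[child])] += 1
        parent[j] += 1
    return {key: n / parent[key[0]] for key, n in joint.items()}

# Toy network S -> C with six complete cases (values are illustrative):
data = [
    {"S": 1, "C": 1}, {"S": 1, "C": 1}, {"S": 1, "C": 0},
    {"S": 0, "C": 0}, {"S": 0, "C": 0}, {"S": 0, "C": 1},
]
cpt = mle_cpt(data, child="C", parents=["S"])
print(cpt[((1,), 1)])  # N(S=1, C=1) / N(S=1) = 2/3
```

Each CPT column is estimated independently, so the whole network is learned by one pass of counting over the data.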

  15.–21. [Slides 15–21: content not captured in the transcript]

  22. Outline • The MLE Principle • Parameter Learning from Complete Data • Missing Values • Parameter Learning from Incomplete Data • Bayesian Parameter Learning

  23.–25. [Slides 23–25: content not captured in the transcript]

  26. Outline • The MLE Principle • Parameter Learning from Complete Data • Missing Values • Parameter Learning from Incomplete Data • Bayesian Parameter Learning

  27.–32. [Slides 27–32: content not captured in the transcript]

  33. Can we implement the idea directly? • An incomplete data case with 1 missing value becomes 2 partial data cases. • What if a data case has 10 missing values, or 100 missing values? • Exponential number of partial data cases. • Fortunately, there is no need to explicitly complete the data. • Next: • Formalize the idea • Figure out exactly what to compute
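The fractional-completion idea above can be written out for the smallest interesting case: a two-node network X → Y (both binary) where some cases are missing the value of X. This is my own minimal sketch, not code from the lecture; a real implementation would use Bayesian-network inference to compute the expected counts for arbitrary structures.

```python
# Minimal EM sketch for a network X -> Y with X sometimes missing.
# An incomplete case is never split into explicit completions; it
# contributes *fractional* counts weighted by the posterior P(X | Y)
# under the current parameters (the E-step).

def em_step(cases, p_x, p_y_given_x):
    """One EM iteration.  cases: list of (x, y) with x possibly None.
    p_x = P(X=1); p_y_given_x[v] = P(Y=1 | X=v)."""
    n_x = [0.0, 0.0]              # expected counts of X = 0, 1
    n_xy1 = [0.0, 0.0]            # expected counts of X = v, Y = 1
    for x, y in cases:
        if x is not None:
            weights = {x: 1.0}    # complete case: a full count
        else:
            # E-step for an incomplete case: P(X = v | Y = y) by Bayes' rule
            like = []
            for v in (0, 1):
                px = p_x if v == 1 else 1.0 - p_x
                py = p_y_given_x[v] if y == 1 else 1.0 - p_y_given_x[v]
                like.append(px * py)
            z = like[0] + like[1]
            weights = {0: like[0] / z, 1: like[1] / z}
        for v, w in weights.items():
            n_x[v] += w
            if y == 1:
                n_xy1[v] += w
    # M-step: maximum-likelihood estimates from the expected counts
    new_p_x = n_x[1] / (n_x[0] + n_x[1])
    new_p_y = [n_xy1[v] / n_x[v] for v in (0, 1)]
    return new_p_x, new_p_y

# Toy data: X missing in the last two cases (values are illustrative).
cases = [(1, 1), (1, 1), (0, 0), (0, 1), (None, 1), (None, 0)]
p_x, p_y = 0.4, [0.3, 0.7]        # arbitrary starting point
for _ in range(50):
    p_x, p_y = em_step(cases, p_x, p_y)
```

A case with k missing values would need 2^k explicit completions, but its expected counts are computed here by inference in constant work per case. As slide 44 notes, the fixed point reached depends on the starting point, so EM is normally restarted from several random initializations and the run with the highest likelihood is kept.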

  34.–36. Formalizing the Idea [continued over three slides; content not captured in the transcript]

  37. It turns out…

  38.–42. [Slides 38–42: content not captured in the transcript]

  43. Convergence of EM

  44. • EM needs to be run multiple times, from different random starting points, to avoid local maxima.

  45. Outline • The MLE Principle • Parameter Learning from Complete Data • Missing Values • Parameter Learning from Incomplete Data • Bayesian Parameter Learning

  46.–48. [Slides 46–48: content not captured in the transcript]
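The closing slides on Bayesian parameter learning are not captured in the transcript. For a single binary variable, the standard setup (sketched below with illustrative names) places a Beta(a, b) prior on θ = P(heads); after observing m_h heads and m_t tails the posterior is Beta(a + m_h, b + m_t), so Bayesian learning amounts to adding the prior's pseudo-counts to the observed counts.

```python
# Sketch of Bayesian parameter learning for one binary variable
# (the standard Beta-Bernoulli conjugate setup).  With a Beta(a, b)
# prior on theta = P(heads), the posterior after m_h heads and m_t
# tails is Beta(a + m_h, b + m_t); the predictive probability of
# heads is the posterior mean.

def posterior_mean(m_h, m_t, a=1.0, b=1.0):
    """Posterior mean of theta under a Beta(a, b) prior.
    a = b = 1 (uniform prior) yields Laplace's rule of succession."""
    return (a + m_h) / (a + b + m_h + m_t)

print(posterior_mean(3, 2))   # (1 + 3) / (2 + 5) = 4/7
print(posterior_mean(0, 0))   # no data: falls back to the prior mean, 0.5
```

Unlike the MLE, the posterior mean never assigns probability zero to an unseen outcome, and as the counts grow it approaches the MLE m_h / (m_h + m_t).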
