
Inference in a Bayesian Network based on Stochastic Simulation


Presentation Transcript


  1. Inference in a Bayesian Network based on Stochastic Simulation
• Assume that we want to compute P(X1=T | X4=T, X6=T).
• We start with a sequence of input patterns I1, I2, …, In. Each input pattern specifies a value for each root node. For example, I1 = {X1=F, X2=T, X3=T}.
• In the input pattern sequence, the frequency of each possible random variable value must conform with the prior probability. For example, if P(X1=T) = 0.3, then X1 should be set to T in roughly 30% of the input patterns.
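
A minimal sketch of generating such an input pattern sequence in Python; the prior probabilities below are assumptions for illustration, since the slides do not give the actual values:

```python
import random

# Hypothetical priors for the root nodes X1, X2, X3
# (the slides do not give the actual numbers).
PRIORS = {"X1": 0.3, "X2": 0.7, "X3": 0.5}

def sample_input_pattern():
    """Draw one input pattern: a truth value for every root node.

    Over many draws, the frequency of each value conforms with its
    prior, e.g. X1=T in roughly 30% of patterns when P(X1=T) = 0.3.
    """
    return {node: random.random() < p for node, p in PRIORS.items()}
```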

  2. Let us use the pattern {X1=F, X2=T, X3=T} to illustrate the process.
• We start from the root nodes and walk down.
• When the parents of a node have all been instantiated, the node is ready to be instantiated.
• Since X2=T and X3=T, the probability that X5=T is 0.1. We invoke a random number generator to obtain a number between 0 and 1. If the number generated is less than 0.1, we set X5=T; otherwise, we set X5=F.
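
Continuing the sketch, a forward-sampling run over the whole network. Only P(X5=T | X2=T, X3=T) = 0.1 comes from the slides; the parent sets of X4 and X6 and every other CPT entry are assumptions made for illustration:

```python
# Parents of each non-root node (hypothetical, except that the slide
# implies X5's parents are X2 and X3).
PARENTS = {"X5": ("X2", "X3"), "X4": ("X1", "X5"), "X6": ("X5",)}

# P(node=T | parent values); only the 0.1 entry is from the slides.
CPT = {
    "X5": {(True, True): 0.1, (True, False): 0.6,
           (False, True): 0.5, (False, False): 0.2},
    "X4": {(True, True): 0.8, (True, False): 0.3,
           (False, True): 0.6, (False, False): 0.1},
    "X6": {(True,): 0.7, (False,): 0.2},
}

def simulate_once():
    """One simulation run: instantiate the roots, then walk down,
    instantiating each node once all of its parents are instantiated."""
    values = sample_input_pattern()
    for node in ("X5", "X4", "X6"):  # a topological order of the non-roots
        p_true = CPT[node][tuple(values[p] for p in PARENTS[node])]
        values[node] = random.random() < p_true  # the random-number test
    return values
```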

  3. Once all simulation runs have been completed, we calculate
P(X1=T | X4=T, X6=T) ≈ (number of runs in which X1=T, X4=T, and X6=T) / (number of runs in which X4=T and X6=T).
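
This count ratio can be computed directly from the simulation runs; a sketch using the simulate_once helper defined above:

```python
def estimate_query(n_runs=100_000):
    """Estimate P(X1=T | X4=T, X6=T) by counting consistent runs."""
    numerator = denominator = 0
    for _ in range(n_runs):
        v = simulate_once()
        if v["X4"] and v["X6"]:      # run consistent with the evidence
            denominator += 1
            numerator += v["X1"]     # True counts as 1
    return numerator / denominator if denominator else float("nan")

print(estimate_query())
```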

  4. Learning in a Bayesian Network
• In general, people employ cause-and-effect reasoning to determine the structure of a Bayesian network.
• If the training samples contain observations at every node, then the learning process is straightforward.
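
With every node observed, the straightforward learning amounts to frequency counting; a sketch under that assumption (the node and parent arguments are whatever the chosen network structure dictates):

```python
from collections import Counter

def learn_cpt_entry(samples, node, parents):
    """Maximum-likelihood CPT entries from fully observed samples:
    P(node=T | parents=u) = count(node=T and parents=u) / count(parents=u)."""
    joint, marginal = Counter(), Counter()
    for s in samples:                      # each s maps node name -> bool
        u = tuple(s[p] for p in parents)
        marginal[u] += 1
        joint[u] += s[node]                # True counts as 1
    return {u: joint[u] / marginal[u] for u in marginal}
```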

  5. However, it is common that we do not have observations of all the nodes. Therefore, we want to find a setting w of the conditional probability tables so that P(D | w) is maximized, where D = {D1, D2, …, Dm} is a set of training samples.
• We assume that the training samples are collected from independent runs of the corresponding experiment. Therefore, P(D | w) = ∏l P(Dl | w).
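
Because the runs are independent, the log-likelihood decomposes into a sum over samples; a sketch, assuming a hypothetical prob_of_sample(s, w) inference routine that returns P(Dl | w) by summing over all values of the unobserved nodes:

```python
import math

def log_likelihood(samples, w, prob_of_sample):
    """ln P(D|w) = sum over l of ln P(Dl|w), by independence of the runs.

    prob_of_sample is a hypothetical inference routine returning
    P(Dl|w) with the unobserved nodes marginalized out.
    """
    return sum(math.log(prob_of_sample(s, w)) for s in samples)
```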

  6. Since ln(x) is a monotonically increasing function, P(D | w) is maximized if and only if ln P(D | w) is maximized.
• For each entry in the conditional probability tables that is yet to be determined, we compute ∂ ln P(D | w) / ∂wijk, where wijk is the entry at row j, column k of the conditional probability table of node Xi (rows correspond to the values of Xi, columns to the configurations of its parents).
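
The row/column indexing can be pictured on the CPT of X5; only the 0.1 entry is from the slides, the rest are the illustrative numbers assumed earlier:

```python
import numpy as np

# Hypothetical CPT of X5 laid out as in the slide's figure:
# row j indexes the values of X5 (T, F); column k indexes the
# parent configurations (X2, X3) = (T,T), (T,F), (F,T), (F,F).
w_5 = np.array([
    [0.1, 0.6, 0.5, 0.2],   # j = 0: X5 = T
    [0.9, 0.4, 0.5, 0.8],   # j = 1: X5 = F
])
assert np.allclose(w_5.sum(axis=0), 1.0)  # each column sums to 1
```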

  7. Therefore,
∂ ln P(D | w) / ∂wijk = Σl ∂ ln P(Dl | w) / ∂wijk = Σl (1 / P(Dl | w)) · ∂P(Dl | w) / ∂wijk.
• Furthermore, expanding P(Dl | w) over the values of Xi and the configurations of its parents Ui gives
∂ ln P(D | w) / ∂wijk = Σl P(Xi = xij, Ui = uik | Dl, w) / wijk,
where xij is the j-th value of Xi and uik is the k-th configuration of its parents.
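
A sketch of evaluating this gradient for one CPT entry, assuming a hypothetical posterior(Dl) inference routine that returns P(Xi = xij, Ui = uik | Dl, w); any exact or approximate inference method could supply it:

```python
def d_ln_likelihood(samples, w_ijk, posterior):
    """Gradient of ln P(D|w) w.r.t. one CPT entry:
    d ln P(D|w) / d w_ijk = sum over l of P(xij, uik | Dl, w) / w_ijk."""
    return sum(posterior(s) for s in samples) / w_ijk
```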

  8. The learning process begins with an initial guess of all unknown wijk.
• For each unknown wijk, compute ∂ ln P(D | w) / ∂wijk and set
new wijk = wijk + α · ∂ ln P(D | w) / ∂wijk, where α is the learning rate.
• Normalize the wijk by setting
normalized wijk = wijk / Σj wijk,
so that each column of every conditional probability table again sums to 1.
• If every wijk converges, then terminate. Otherwise, repeat the process.
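
A sketch of this loop; the learning rate alpha, the small positive floor, and the convergence tolerance are assumptions, and grad_fn stands for the gradient computation sketched above:

```python
def ascent_step(w, grad, alpha=0.05, floor=1e-9):
    """w maps (i, j, k) -> CPT entry.  Take a gradient step, clip the
    result to stay positive, then renormalize each column (fixed i, k)
    so the entries over the rows j again sum to 1."""
    stepped = {ijk: max(v + alpha * grad[ijk], floor) for ijk, v in w.items()}
    col_sums = {}
    for (i, j, k), v in stepped.items():
        col_sums[(i, k)] = col_sums.get((i, k), 0.0) + v
    return {(i, j, k): v / col_sums[(i, k)] for (i, j, k), v in stepped.items()}

def learn(w, grad_fn, tol=1e-6, max_iters=1000):
    """Repeat the update until every w_ijk changes by less than tol."""
    for _ in range(max_iters):
        new_w = ascent_step(w, grad_fn(w))
        if max(abs(new_w[ijk] - w[ijk]) for ijk in w) < tol:
            return new_w
        w = new_w
    return w
```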
