
Finding Optimal Bayesian Networks with Greedy Search


Presentation Transcript


  1. Finding Optimal Bayesian Networks with Greedy Search Max Chickering

  2. Outline • Bayesian-Network Definitions • Learning • Greedy Equivalence Search (GES) • Optimality of GES

  3. Bayesian Networks Use B = (S, θ) to represent p(X1, …, Xn)
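The factorization behind B = (S, θ) can be sketched in a few lines of Python. Everything below (the chain structure, the variable names, the toy probabilities) is an illustrative assumption, not part of the talk:

```python
from itertools import product

# A sketch of B = (S, theta): S given as parent sets, theta as
# conditional probability tables. Structure and numbers are toy values.
parents = {"X": (), "Y": ("X",), "Z": ("Y",)}        # chain X -> Y -> Z
theta = {
    "X": {(): {0: 0.6, 1: 0.4}},
    "Y": {(0,): {0: 0.8, 1: 0.2}, (1,): {0: 0.3, 1: 0.7}},
    "Z": {(0,): {0: 0.9, 1: 0.1}, (1,): {0: 0.4, 1: 0.6}},
}

def joint(assignment):
    """p(x1, ..., xn) = product over i of p(xi | Par(xi))."""
    p = 1.0
    for var, pars in parents.items():
        cfg = tuple(assignment[q] for q in pars)
        p *= theta[var][cfg][assignment[var]]
    return p

# The factors define a proper joint distribution: it sums to 1.
total = sum(joint(dict(zip("XYZ", vals))) for vals in product((0, 1), repeat=3))
```

Any network over discrete variables can be encoded this way; only the parent sets and tables change.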

  4. Markov Conditions From the factorization: I(X, ND | Par(X)), i.e. each variable X is independent of its non-descendants (ND) given its parents Par(X). [Figure: node X with its parents, non-descendants, and descendants.] Markov Conditions + Graphoid Axioms characterize all independencies

  5. Structure/Distribution Inclusion p is included in S if there exists θ s.t. B = (S, θ) defines p. [Figure: the set of all distributions, with p inside the region defined by S.]

  6. Structure/Structure Inclusion (T ≤ S) T is included in S if every p included in T is included in S. [Figure: the region of distributions for T nested inside the region for S.] (S is an I-map of T)

  7. Structure/Structure Equivalence (T ≈ S) [Figure: the regions of distributions included in S and in T coincide.] Reflexive, Symmetric, Transitive

  8. Equivalence [Figure: two DAGs over A, B, C, D sharing the same skeleton and v-structures.] Theorem (Verma and Pearl, 1990): S ≈ T ⇔ S and T have the same v-structures and skeletons
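The Verma-Pearl characterization is easy to check mechanically. A minimal sketch, assuming DAGs are stored as {node: set of parents} (a representation chosen here for illustration):

```python
def skeleton(dag):
    """Undirected adjacencies of a DAG given as {node: set_of_parents}."""
    return {frozenset((child, p)) for child, ps in dag.items() for p in ps}

def v_structures(dag):
    """Triples X -> Z <- Y where X and Y are non-adjacent (colliders)."""
    skel = skeleton(dag)
    vs = set()
    for z, ps in dag.items():
        pars = sorted(ps)
        for i in range(len(pars)):
            for j in range(i + 1, len(pars)):
                x, y = pars[i], pars[j]
                if frozenset((x, y)) not in skel:
                    vs.add((x, z, y))
    return vs

def equivalent(g, h):
    """Verma & Pearl (1990): same skeleton and same v-structures."""
    return skeleton(g) == skeleton(h) and v_structures(g) == v_structures(h)

# X -> Y -> Z and X <- Y <- Z are equivalent; the collider X -> Y <- Z is not.
chain1 = {"X": set(), "Y": {"X"}, "Z": {"Y"}}
chain2 = {"X": {"Y"}, "Y": {"Z"}, "Z": set()}
collider = {"X": set(), "Y": {"X", "Z"}, "Z": set()}
```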

  9. Learning Bayesian Networks Learn the structure; estimate the conditional distributions. [Figure: the generative distribution p* over X, Y, Z produces iid samples (the observed data), from which the model is learned.]

  10. Learning Structure • Scoring criterion F(D, S) • Search procedure: identify one or more structures with high values for the scoring criterion

  11. Properties of Scoring Criteria • Consistent • Locally Consistent • Score Equivalent

  12. Consistent Criterion [Figure: p* and several candidate structures over X, Y, Z.] The criterion favors (in the limit) the simplest model that includes the generative distribution p*: S includes p*, T does not include p* ⇒ F(S,D) > F(T,D); both include p*, S has fewer parameters ⇒ F(S,D) > F(T,D)

  13. Locally Consistent Criterion S and T differ by one edge: [Figure: S has X and Y non-adjacent; T adds the edge between them.] If I(X,Y | Par(X)) holds in p*, then F(S,D) > F(T,D); otherwise F(S,D) < F(T,D)

  14. Score-Equivalent Criterion [Figure: S is X → Y; T is X ← Y.] S ≈ T ⇒ F(S,D) = F(T,D)

  15. Bayesian Criterion (consistent, locally consistent, and score equivalent) S^h: the hypothesis that the generative distribution p* has the same independence constraints as S. F_Bayes(S,D) = log p(S^h | D) = k + log p(D | S^h) + log p(S^h), where log p(S^h) is the structure prior (e.g. prefer simple) and log p(D | S^h) is the marginal likelihood (closed form under standard assumptions)
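Since the Bayesian score approaches BIC in the large-sample limit (slide 40), a minimal BIC sketch for discrete data conveys the idea of a consistent, score-equivalent criterion. The function name, data layout, and parameter bookkeeping below are illustrative assumptions:

```python
import math
from collections import Counter

def bic_score(dag, data, arities):
    """BIC = log-likelihood - (log N / 2) * number of free parameters.
    dag: {var: tuple_of_parents}; data: list of dicts; arities: {var: #states}.
    A sketch of a consistent, score-equivalent criterion; not the talk's code."""
    n = len(data)
    score = 0.0
    for var, pars in dag.items():
        counts = Counter()       # (parent_config, value) -> count
        par_counts = Counter()   # parent_config -> count
        for row in data:
            cfg = tuple(row[p] for p in pars)
            counts[(cfg, row[var])] += 1
            par_counts[cfg] += 1
        for (cfg, _), c in counts.items():
            score += c * math.log(c / par_counts[cfg])   # max-likelihood term
        q = 1                     # number of parent configurations
        for p in pars:
            q *= arities[p]
        score -= (math.log(n) / 2) * (arities[var] - 1) * q
    return score

# Illustrative data: Y copies X, so the model with the X -> Y edge
# should outscore the empty (all-independence) model.
data = [{"X": i % 2, "Y": i % 2} for i in range(20)]
arities = {"X": 2, "Y": 2}
with_edge = bic_score({"X": (), "Y": ("X",)}, data, arities)
empty = bic_score({"X": (), "Y": ()}, data, arities)
```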

  16. Search Procedure • Set of states • Representation for the states • Operators to move between states • Systematic Search Algorithm

  17. Greedy Equivalence Search • Set of states: equivalence classes of DAGs • Representation for the states: essential graphs • Operators to move between states: forward and backward operators • Systematic search algorithm: two-phase greedy

  18. Representation: Essential Graphs [Figure: a DAG over A, B, C, D, E, F and its essential graph, in which compelled edges are directed and reversible edges are undirected.]

  19. GES Operators Forward Direction – single edge additions Backward Direction – single edge deletions

  20. Two-Phase Greedy Algorithm • Phase 1: Forward Equivalence Search (FES) • Start with all-independence model • Run Greedy using forward operators • Phase 2: Backward Equivalence Search (BES) • Start with local max from FES • Run Greedy using backward operators
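The two-phase loop can be sketched abstractly. The neighbor generators and the scoring function below are placeholders standing in for the essential-graph operators and the Bayesian criterion; this is a sketch of the control flow, not the talk's implementation:

```python
def ges(initial_state, forward_neighbors, backward_neighbors, score):
    """Two-phase greedy search over equivalence classes (a sketch).
    Phase 1 (FES) climbs with forward operators; Phase 2 (BES) continues
    from that local maximum with backward operators."""
    def greedy(state, neighbors):
        while True:
            best, best_score = state, score(state)
            for nb in neighbors(state):
                s = score(nb)
                if s > best_score:
                    best, best_score = nb, s
            if best == state:      # local maximum reached
                return state
            state = best
    state = greedy(initial_state, forward_neighbors)    # Phase 1: FES
    return greedy(state, backward_neighbors)            # Phase 2: BES

# Toy sanity check on a one-dimensional state space with a peak at 3:
result = ges(
    0,
    lambda s: [s + 1] if s < 5 else [],    # "forward" moves
    lambda s: [s - 1] if s > 0 else [],    # "backward" moves
    lambda s: -(s - 3) ** 2,
)
```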

  21. Forward Operators • Consider all DAGs in the current state • For each DAG, consider all single-edge additions (acyclic) • Take the union of the resulting equivalence classes
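The DAG-level step of the forward operator (single-edge additions that keep the graph acyclic) can be sketched as follows; forming the union of the resulting equivalence classes is omitted, and the {node: set of parents} representation is an assumption for illustration:

```python
def is_acyclic(dag):
    """Kahn's algorithm on a graph given as {node: set_of_parents}."""
    indeg = {v: len(ps) for v, ps in dag.items()}
    queue = [v for v, d in indeg.items() if d == 0]
    seen = 0
    while queue:
        v = queue.pop()
        seen += 1
        for w, ps in dag.items():
            if v in ps:
                indeg[w] -= 1
                if indeg[w] == 0:
                    queue.append(w)
    return seen == len(dag)

def single_edge_additions(dag):
    """All DAGs reachable by adding one edge between non-adjacent nodes
    while staying acyclic."""
    out = []
    for x in dag:
        for y in dag:
            if x != y and x not in dag[y] and y not in dag[x]:
                new = {v: set(ps) for v, ps in dag.items()}
                new[y].add(x)                  # add edge x -> y
                if is_acyclic(new):
                    out.append(new)
    return out

# On the chain X -> Y -> Z, only X -> Z can be added: Z -> X would close a cycle.
chain = {"X": set(), "Y": {"X"}, "Z": {"Y"}}
adds = single_edge_additions(chain)
```

The backward operators of slide 24 are the mirror image: enumerate single-edge deletions (no acyclicity check is needed, since deleting an edge cannot create a cycle).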

  22. Forward-Operators Example Current state: an essential graph over A, B, C. [Figure: all DAGs in the current state; all DAGs resulting from a single-edge addition; the union of the corresponding essential graphs.]

  23. Forward-Operators Example [Figure: the resulting neighbor states, shown as essential graphs over A, B, C.]

  24. Backward Operators • Consider all DAGs in the current state • For each DAG, consider all single-edge deletions • Take the union of the resulting equivalence classes

  25. Backward-Operators Example Current state: an essential graph over A, B, C. [Figure: all DAGs in the current state; all DAGs resulting from a single-edge deletion; the union of the corresponding essential graphs.]

  26. Backward-Operators Example [Figure: the resulting neighbor states, shown as essential graphs over A, B, C.]

  27. DAG Perfect A distribution p is DAG-perfect if there exists a DAG G such that I(X,Y | Z) holds in p ⇔ I(X,Y | Z) holds in G. [Figure: a non-DAG-perfect distribution q over A, B, C, D satisfying I(A,D | B,C) and I(B,C | A,D); no single DAG encodes exactly these two independencies.]

  28. DAG-Perfect Consequence: Composition Axiom Holds in p* If I(X, Y | Z) holds for a set Y, then I(X, Y' | Z) holds for some singleton Y' ∈ Y. [Figure: illustration over nodes A, B, C, D and X.]

  29. Optimality of GES If p* is DAG-perfect w.r.t. some G*: [Figure: G* generates p*; n iid samples over X, Y, Z are observed; GES outputs the state S; S* is the equivalence class of G*.] For large n, S = S*

  30. Optimality of GES [Figure: the search runs from the all-independence state through FES to a state that includes S*, then through BES to a state that equals S*.] Proof Outline: • After the first phase (FES), the current state includes S* • After the second phase (BES), the current state = S*

  31. FES Maximum Includes S* Assume the local max S does NOT include S*. Take any DAG G from S. Markov Conditions characterize independencies: in p* there exists an X not independent of its non-descendants given its parents, so ¬I(X, {A,B,C,D} | E) holds in p*. [Figure: DAG over A, B, C, D, E, X.] p* is DAG-perfect ⇒ the composition axiom holds ⇒ ¬I(X, C | E) in p*. By local consistency, adding the C → X edge improves the score, and the resulting equivalence class is a neighbor

  32. BES Identifies S* • The current state always includes S*: local consistency of the criterion • The local maximum is S*: Meek’s conjecture

  33. Meek’s Conjecture For any pair of DAGs G, H such that H includes G (G ≤ H), there exists a sequence of (1) covered edge reversals in G and (2) single-edge additions to G, such that after each change G ≤ H, and after all changes G = H
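The covered-edge notion used in the conjecture is simple to state in code: X → Y is covered in G exactly when Y's parents are X's parents plus X itself, so reversing it preserves equivalence. A sketch, assuming DAGs are stored as {node: set of parents}:

```python
def is_covered(dag, x, y):
    """Edge x -> y is covered iff Par(y) = Par(x) ∪ {x}.
    Reversing a covered edge yields an equivalent DAG
    (same skeleton, same v-structures)."""
    assert x in dag[y], "x -> y must be an edge of the DAG"
    return dag[y] == dag[x] | {x}

# In the chain X -> Y -> Z: X -> Y is covered (Par(Y) = {X} = Par(X) ∪ {X}),
# but Y -> Z is not (Par(Z) = {Y}, while Par(Y) ∪ {Y} = {X, Y}).
chain = {"X": set(), "Y": {"X"}, "Z": {"Y"}}
```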

  34. Meek’s Conjecture [Figure: example DAG H over A, B, C, D with I(A,B) and I(C,B | A,D), and the sequence of DAGs transforming G into H.]

  35. Meek’s Conjecture and BES (S* ≤ S) Assume: local max S, not S*. Take any DAG H from S and any DAG G from S*. [Figure: the Meek sequence transforming G into H: Add, Rev, Rev, Add, Rev.]

  36. Meek’s Conjecture and BES (S* ≤ S) [Figure: the same sequence read in reverse transforms H into G: Del, Rev, Rev, Del, Rev.]

  37. Meek’s Conjecture and BES (S* ≤ S) [Figure: the reversed sequence shows that some equivalence class between S and S* is a neighbor of S in BES.]

  38. Discussion Points • In practice, GES is as fast as DAG-based search: the neighborhood of essential graphs can be generated and scored very efficiently • When the DAG-perfect assumption fails, we still get optimality guarantees: as long as composition holds in the generative distribution, the local maximum is inclusion-minimal

  39. Thanks! My Home Page: http://research.microsoft.com/~dmax Relevant Papers: “Optimal Structure Identification with Greedy Search” (JMLR submission): contains detailed proofs of Meek’s conjecture and of the optimality of GES. “Finding Optimal Bayesian Networks” (UAI02 paper with Chris Meek): contains an extension of the optimality results of GES when p* is not DAG-perfect.

  40. Bayesian Criterion is Locally Consistent • The Bayesian score approaches BIC + a constant • BIC is decomposable • The difference in score is the same for any DAGs that differ by a Y → X edge, provided X has the same parents [Figure: X and Y within the complete network (which always includes p*).]

  41. Bayesian Criterion is Consistent • Assume the conditionals are unconstrained multinomials or linear regressions • Geiger, Heckerman, King and Meek (2001): network structures are curved exponential models • Haughton (1988): the Bayesian criterion is then consistent

  42. Bayesian Criterion is Score Equivalent S ≈ T ⇒ F(S,D) = F(T,D). [Figure: S is X → Y; T is X ← Y.] S^h imposes no independence constraints and T^h imposes no independence constraints, so S^h = T^h

  43. Active Paths Z-active path between X and Y (non-standard): • Neither X nor Y is in Z • Every pair of colliding edges meets at a member of Z • No other pair of edges meets at a member of Z [Figure: a path from X to Y with a collider at a node of Z.] If G ≤ H and there is a Z-active path between X and Y in G, then there is a Z-active path between X and Y in H

  44. Active Paths [Figure: a path through nodes X, A, Z, W, B, Y.] • X–Y: out of X and into Y • X–W: out of both X and W • Any sub-path between A, B ∈ Z is also active • For A–B and B–C, at least one is out of B • Hence there is an active path between A and C

  45. Simple Active Paths If an active path contains Y → X, then there exists an active path in which (1) the edge appears exactly once, or (2) the edge appears exactly twice. [Figure: the two cases on a path A … Y → X … B.] To simplify the discussion, assume only case (1); the proofs for (2) are almost identical

  46. Typical Argument: Combining Active Paths [Figure: active paths in G, H, and G′, where Z is a sink node adjacent to X and Y, and G ≤ H.] Suppose there is an active path in G′ with X not in the conditioning set but no corresponding active path in H. Then Z is not in the conditioning set.

  47. Proof Sketch Given two DAGs G, H with G < H, identify either: • a covered edge X → Y in G that has the opposite orientation in H, or • a new edge X → Y to be added to G such that G remains included in H

  48. The Transformation Choose any node Y that is a sink in H. • Case 1a: Y is a sink in G and there exists X ∈ Par_H(Y) with X ∉ Par_G(Y) • Case 1b: Y is a sink in G with the same parents • Case 2a: there exists X s.t. Y → X is covered • Case 2b: there exists X s.t. Y → X, and W is a parent of Y but not of X • Case 2c: for every Y → X, Par(Y) ⊆ Par(X) [Figure: the five cases.]

  49. Preliminaries (G ≤ H) • The adjacencies in G are a subset of the adjacencies in H • If X → Y ← Z is a v-structure in G but not in H, then X and Z are adjacent in H • Any new active path that results from adding X → Y to G includes X → Y
