This document explores the concepts of I-maps and perfect maps within the framework of probabilistic graphical models. It explains how a graph G over which a distribution P factorizes serves as an I-map for P, while noting that an I-map need not capture all of the independencies present in P. The aim is to design a sparse graph that encodes as many of P's independencies as possible, leading to more compact and informative models. It further discusses minimal I-maps, perfect maps, and the limits of capturing I(P) exactly with Bayesian networks (BNs) and Markov networks (MNs).
Representation: Probabilistic Graphical Models. Independencies: I-maps and Perfect Maps
Capturing Independencies in P
• If P factorizes over G, then G is an I-map for P: I(G) ⊆ I(P)
• But not necessarily vice versa: there can be independencies in I(P) that are not in I(G)
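To make the I-map relationship concrete, here is a minimal Python sketch (not part of the original slides) that tests whether a claimed marginal independence holds in a small discrete distribution; the toy joint table and variable positions are illustrative assumptions.

```python
# A minimal sketch (not from the slides): test whether a claimed marginal
# independence holds in a small discrete distribution P.  The toy joint
# table and variable positions are illustrative assumptions.
from itertools import product

# Toy joint P(X, Y) over binary variables, chosen so that X and Y are dependent.
P = {xy: p for xy, p in zip(product([0, 1], repeat=2), [0.4, 0.1, 0.1, 0.4])}

def marginal(P, keep):
    """Sum out all variables except those at the positions listed in `keep`."""
    out = {}
    for assignment, p in P.items():
        key = tuple(assignment[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out

def independent(P, i, j, tol=1e-9):
    """Check the marginal independence (X_i ⊥ X_j) in P."""
    Pi, Pj, Pij = marginal(P, [i]), marginal(P, [j]), marginal(P, [i, j])
    return all(abs(Pij[(a, b)] - Pi[(a,)] * Pj[(b,)]) < tol for (a, b) in Pij)

# The complete graph with the edge X-Y asserts no independencies, so it is
# trivially an I-map for P.  The empty graph asserts (X ⊥ Y), which fails
# here, so the empty graph is NOT an I-map for this P.
print(independent(P, 0, 1))   # False
```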
Want a Sparse Graph
• If the graph encodes more independencies
  • it is sparser (has fewer parameters)
  • and more informative
• Want a graph that captures as much of the structure in P as possible
Minimal I-map
• An I-map with no redundant edges: removing any edge would destroy the I-map property
[Figure: alternative minimal I-maps over the variables D, I, G]
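As an illustration, the following Python sketch hedges the standard ordering-based construction of a minimal I-map; the conditional-independence oracle `ci`, the variable names, and the toy chain example are assumptions for illustration, not the slides' own code.

```python
# A hedged sketch of the standard ordering-based minimal I-map construction:
# for each variable X_i, pick a smallest parent set U among its predecessors
# with (X_i ⊥ predecessors \ U | U).
from itertools import combinations

def minimal_imap(order, ci):
    """ci(x, ys, zs) should return True iff (x ⊥ ys | zs) holds in P."""
    parents = {}
    for i, x in enumerate(order):
        preds = order[:i]
        # Try parent sets from smallest to largest; the first one that makes
        # x independent of its remaining predecessors cannot be shrunk, so
        # the resulting graph has no redundant edges into x.
        for k in range(len(preds) + 1):
            chosen = next((set(c) for c in combinations(preds, k)
                           if ci(x, [v for v in preds if v not in c], list(c))),
                          None)
            if chosen is not None:
                parents[x] = sorted(chosen)
                break
    return parents

# Toy usage: a hand-coded oracle for a chain D -> G -> L.  (For simplicity it
# checks joint independence variable-by-variable, which is enough here.)
true_cis = {("L", "D", frozenset({"G"}))}
def ci(x, ys, zs):
    return all((x, y, frozenset(zs)) in true_cis for y in ys)

print(minimal_imap(["D", "G", "L"], ci))   # {'D': [], 'G': ['D'], 'L': ['G']}
```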
MN as a perfect map
• A perfect map exactly captures the independencies in P: I(H) = I(P)
• Example: for a distribution with the v-structure D → G ← I (D ⊥ I, but D and I become dependent given G), no Markov network over D, I, G is a perfect map: dropping the D–I edge wrongly asserts (D ⊥ I | G), while keeping it loses (D ⊥ I)
[Figure: candidate graphs over the variables D, I, G]
BN as a perfect map
• A perfect map exactly captures the independencies in P: I(G) = I(P)
• Example: the pairwise Markov network A – B – C – D – A encodes exactly (A ⊥ C | B, D) and (B ⊥ D | A, C); no Bayesian network over these four variables captures exactly this pair of independencies
[Figure: candidate graphs over the variables A, B, C, D]
More imperfect maps
• Some distributions have no perfect map at all: no graph G with I(G) = I(P) and no graph H with I(H) = I(P)
• Example: Y = X1 XOR X2 with X1, X2 independent fair coins; every pair of variables is marginally independent, yet any two become dependent given the third
[Figure: X1, X2, Y with Y = X1 XOR X2]
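A small Python check (an illustrative sketch, not from the slides) makes the XOR claim concrete: it enumerates the joint distribution, confirms the pairwise marginal independence, and shows that the conditional independence given the third variable fails.

```python
# Illustrative sketch of the XOR example: enumerate the joint distribution of
# X1, X2 (independent fair coins) and Y = X1 XOR X2, confirm the pairwise
# marginal independence, and show that conditioning on Y makes X1 and X2
# dependent, so no graph encodes I(P) exactly.
from itertools import product

P = {(x1, x2, x1 ^ x2): 0.25 for x1, x2 in product([0, 1], repeat=2)}

def marg(keep):
    """Marginal over the variable positions in `keep`."""
    out = {}
    for a, p in P.items():
        k = tuple(a[i] for i in keep)
        out[k] = out.get(k, 0.0) + p
    return out

# Marginal independence (X1 ⊥ X2): P(x1, x2) = P(x1) P(x2) everywhere.
p1, p2, p12 = marg([0]), marg([1]), marg([0, 1])
print(all(abs(p12[(a, b)] - p1[(a,)] * p2[(b,)]) < 1e-9 for (a, b) in p12))  # True

# Conditional independence (X1 ⊥ X2 | Y) requires
# P(x1, x2, y) P(y) = P(x1, y) P(x2, y).  Checking it on the support already
# finds a violation: given Y, X2 is determined by X1.
p123, py, p1y, p2y = marg([0, 1, 2]), marg([2]), marg([0, 2]), marg([1, 2])
print(all(abs(p123[(a, b, c)] * py[(c,)] - p1y[(a, c)] * p2y[(b, c)]) < 1e-9
          for (a, b, c) in p123))  # False
```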
Summary
• Graphs that capture more of I(P) are more compact and provide more insight
• But it may be impossible to capture I(P) perfectly (as a BN, as an MN, or at all)
• Converting a BN to an MN (moralization) loses the independencies encoded by v-structures
• Converting an MN to a BN requires adding edges to triangulate undirected loops