Belief Augmented Frames 14 June 2004 Colin Tan ctank@comp.nus.edu.sg http://www.comp.nus.edu.sg/~ctank
Motivation • Primary Objective: • To study how uncertain and defeasible knowledge may be integrated into a knowledge base. • Main Deliverable: • A system of theories and techniques that allows us to integrate newly gained knowledge and to use this knowledge to make better inferences.
Proposed Solution • A frame-based reasoning system augmented with belief measures. • Frame-based system to structure knowledge and relations between entities. • Belief measures provide uncertain reasoning on existence of entities and the relationships between them.
Why Belief Measures? • Statistical Measures • Standard tool for modeling uncertainty. • Essentially, if the probability that a proposition E is true is p, then the probability that E is false is 1 - p. • P(E) = p • P(not E) = 1 - p
Why Belief Measures? • This relationship between P(E) and P(not E) introduces a problem: it leaves no room for ignorance. Either the proposition is true with probability p, or it is false with probability 1 - p. • This can be counter-intuitive at times.
Why Belief Measures? • [Shortliffe75] cites a study in which, given a set of symptoms, doctors were willing to declare with certainty x that a patient was suffering from a disease D, yet were unwilling to declare with certainty 1-x that the patient was not suffering from D.
Why Belief Measures? • To allow for ignorance, our research focuses on belief measures. • The ability to model ignorance is inherent in belief systems. • E.g. in Dempster-Shafer Theory [Dempster67], if our beliefs in E1 and E2 are 0.1 and 0.3 respectively, then the ignorance is 1 – (0.1 + 0.3) = 0.6.
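A minimal Python sketch of this Dempster-Shafer ignorance calculation (the function name dst_ignorance is illustrative, not from the slides):

```python
def dst_ignorance(masses):
    """Ignorance left over after committing basic belief masses to focal elements."""
    return 1.0 - sum(masses)

# m(E1) = 0.1 and m(E2) = 0.3 leave 0.6 uncommitted, i.e. ignorance
print(dst_ignorance([0.1, 0.3]))  # 0.6
```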
Why Frames? • Frames are a powerful form of representation. • Intuitively represents relationships between objects using slot-filler pairs. • Simple to perform reasoning based on relationships. • Hierarchical • Can perform generalizations to create general models derived from a set of frames.
Why Frames? • Frames are a powerful form of representation: • Daemons • Small programs that are invoked when a frame is instantiated or when a slot is filled.
Combining Frames with Uncertainty Measures • Augmenting slot-value pairs with uncertainty values. • Enhance expressiveness of relationships. • Can now do reasoning using the uncertainty values. • A Belief Augmented Frame (BAF) is a frame structure augmented with belief measures.
Belief Representation in Belief Augmented Frames • Beliefs are represented by two masses: • φT: Belief mass supporting a proposition. • φF: Belief mass refuting a proposition. • In general φT + φF ≤ 1 • Room to model ignorance of the facts. • Separate belief masses allow us to: • Draw φT and φF from different sources. • Have different chains of reasoning for φT and φF.
Belief Representation in Belief Augmented Frames • This ability to derive the refuting masses from different sources and chains of reasoning is unique to BAF. • In Probabilistic Argumentation Systems (the closest competitor to BAF), for example, p(not E) = 1 – p(E). • It is possible, though, to achieve this in Dempster-Shafer Theory through the underlying mechanisms generating m(E) and m(not E).
Belief Representation in Belief Augmented Frames • BAFs, however, give a formal framework for deriving φT and φF: • BAF-Logic, a complete reasoning system for BAFs. • BAFs provide a formal framework for Frame operations. • E.g. how to generalize from a given set of frames. • BAF and DST can in fact be complementary: • BAF as a basis for generating masses in DST.
Degree of Inclination • The Degree of Inclination is defined as: • DI = φT - φF • DI is in the range [-1, 1]. • One possible interpretation of DI: propositions with |DI| ≥ 0.5 are at least "probably true" (or "probably false"), while values of DI near 0 indicate that little is known either way.
Utility Value • The Degree of Inclination DI can be re-mapped to the range [0, 1] through the Utility function: • U = (DI + 1) / 2 • By normalizing U across all relevant propositions it becomes possible to use U as a statistical measure.
Plausibility, Ignorance, Evidential Interval • Plausibility pl is defined as: pl = 1 - φF • Ignorance ig is defined as: ig = pl – φT = 1 – (φT + φF) • The Evidential Interval EI is defined to be the range EI = [φT, pl]
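A minimal Python sketch collecting the measures defined above into one place (the class name BeliefMasses and the property names are illustrative, not part of BAF):

```python
from dataclasses import dataclass

@dataclass
class BeliefMasses:
    phi_t: float  # belief mass supporting the proposition (φT)
    phi_f: float  # belief mass refuting the proposition (φF)

    @property
    def di(self) -> float:
        # Degree of Inclination, in [-1, 1]
        return self.phi_t - self.phi_f

    @property
    def utility(self) -> float:
        # DI re-mapped to [0, 1]
        return (self.di + 1.0) / 2.0

    @property
    def plausibility(self) -> float:
        return 1.0 - self.phi_f

    @property
    def ignorance(self) -> float:
        # = 1 - (phi_t + phi_f)
        return self.plausibility - self.phi_t

    @property
    def evidential_interval(self) -> tuple:
        return (self.phi_t, self.plausibility)

print(BeliefMasses(0.42, 0.12).evidential_interval)  # (0.42, 0.88)
```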
Reasoning with BAFs • Belief Augmented Frame Logic, or BAF-Logic, is used for reasoning with BAFs. • Throughout the remainder of this presentation, we will consider two propositions A and B, with supporting and refuting masses φTA, φFA, φTB, and φFB.
Reasoning with BAFs: AND, OR, NOT • A ∧ B: • φTA∧B = min(φTA, φTB) • φFA∧B = max(φFA, φFB) • A ∨ B: • φTA∨B = max(φTA, φTB) • φFA∨B = min(φFA, φFB) • ¬A: • φT¬A = φFA • φF¬A = φTA
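These three connectives transcribe directly into code; a minimal sketch operating on (φT, φF) pairs, with the function names baf_and, baf_or and baf_not chosen for illustration:

```python
def baf_and(a, b):
    """A AND B: the weaker support, the stronger refutation."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def baf_or(a, b):
    """A OR B: the stronger support, the weaker refutation."""
    return (max(a[0], b[0]), min(a[1], b[1]))

def baf_not(a):
    """NOT A: the supporting and refuting masses swap roles."""
    return (a[1], a[0])

# e.g. with A = (0.7, 0.2) and B = (0.9, 0.1):
print(baf_and((0.7, 0.2), (0.9, 0.1)))  # (0.7, 0.2)
print(baf_or((0.7, 0.2), (0.9, 0.1)))   # (0.9, 0.1)
```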
Default Reasoning in BAF • When the truth of a proposition is unknown, we set the supporting and refuting masses to φTDEF and φFDEF respectively. • Conventionally, φTDEF = φFDEF = 0 • Two special default values: • φTONE = 1, φFONE = 0 • φTZERO = 0, φFZERO = 1 • Used for defining contradiction and tautology.
Default Reasoning in BAF • Other default reasoning models are possible too. • E.g. categorical defaults: • (A, φTA, φFA) : (B, φTB, φFB) / (B, φTB, φFB) • Semantics: • Given a knowledge base KB. • If A is derivable from KB and ¬B is not, infer B with supporting and refuting masses φTB and φFB. • A detailed study of this topic remains to be done.
BAF and Propositional Logic • BAF-Logic properties that are identical to Propositional Logic: • Associativity, Commutativity, Distributivity, Idempotency, Absorption, De-Morgan's Theorem, ¬¬-elimination (double negation).
BAF and Propositional Logic • Other properties of Propositional Logic work slightly differently in BAF-Logic. • In particular, some of the properties hold true only if the constituent propositions are at least "probably true" or "probably false". • I.e. |DIP| ≥ 0.5
BAF and Propositional Logic • For example, P and P → Q must both be at least probably true for Q to not be false. • If DIP and DIP→Q are less than 0.5, DIQ might end up < 0. • For ∨-elimination, P ∨ Q must be probably true, and P must be probably false, before we can infer that Q is not false.
BAF and Propositional Logic • This can lead to unexpected reasoning results. • E.g. P and P → Q are not false, yet DIQ < 0. • A possible solution is to set {φTQ = φTDEF, φFQ = φFDEF} when DIP and DIP→Q are less than 0.5. • In actual fact, the magnitudes of DIP and DIP→Q don't both have to be at least 0.5; only their average magnitude must be at least 0.5.
Belief Revision • Beliefs are not static. We need a mechanism to update beliefs [Pollock00]. • To track the revision of belief masses, we add a subscript t to time-stamp the masses. • E.g. φTP,0 is the value of φTP at time 0, φTP,1 at time 1, etc. • At time t, given a proposition P with masses φTP,t and φFP,t, suppose we derive masses φTP,* and φFP,*; then the new belief masses at time t+1 are: • φTP,t+1 = αφTP,t + (1 – α)φTP,* • φFP,t+1 = αφFP,t + (1 – α)φFP,*
Belief Revision • Intuitively, this means that we give a credibility factor α to the existing masses, and (1 – α) to the derived masses. • α therefore controls the rate at which beliefs are revised, given new evidence.
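A small sketch of this revision rule, assuming masses are carried as (φT, φF) pairs (the function name revise is illustrative):

```python
def revise(current, derived, alpha):
    """Blend the existing masses with the newly derived ones.

    alpha is the credibility factor given to the existing masses;
    (1 - alpha) goes to the newly derived masses.
    """
    phi_t = alpha * current[0] + (1.0 - alpha) * derived[0]
    phi_f = alpha * current[1] + (1.0 - alpha) * derived[1]
    return (phi_t, phi_f)

# Default (total ignorance) revised with derived masses (0.7, 0.2), alpha = 0.4:
print(revise((0.0, 0.0), (0.7, 0.2), 0.4))  # approximately (0.42, 0.12)
```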
An Example • Given the following propositions in your knowledge base: • KB = {(A, 0.7, 0.2), (B, 0.9, 0.1), (C, 0.2, 0.7), (A ∧ B → R, φTONE, φFONE), (A ∧ ¬B → ¬R, φTONE, φFONE)} • We want to derive φTR,1, φFR,1.
An Example • Combining our clauses regarding R, we obtain: • R = (A ∧ B) ∧ ¬(A ∧ ¬B) • = A ∧ B ∧ (¬A ∨ B) • With De-Morgan's Theorem we can derive ¬R: • ¬R = ¬A ∨ ¬B ∨ (A ∧ ¬B)
An Example • φTR,* = min(φTA, φTB, max(φFA, φTB)) = min(0.7, 0.9, max(0.2, 0.9)) = min(0.7, 0.9, 0.9) = 0.7 • φFR,* = max(φFA, φFB, min(φTA, φFB)) = max(0.2, 0.1, min(0.7, 0.1)) = max(0.2, 0.1, 0.1) = 0.2
An Example • We begin with default values for R: • φTR,0 = φTDEF = 0.0 • φFR,0 = φFDEF = 0.0 • This gives us DIR,0 = 0.0, UR,0 = 0.5, plR,0 = 1.0 and igR,0 = 1.0 (total ignorance).
An Example • Deriving the new belief values with α = 0.4: • φTR,1 = 0.4 × 0.0 + (1.0 – 0.4) × 0.7 = 0.42 • φFR,1 = 0.4 × 0.0 + (1.0 – 0.4) × 0.2 = 0.12 • This gives us DIR,1 = 0.30, UR,1 = 0.65, plR,1 = 0.88 and igR,1 = 0.46.
An Example • We see that with our new information about R, our ignorance falls from 1.0 (total ignorance) to 0.46. With more knowledge available about whether R is true, we also see the plausibility falling from 1.0 to 0.88. • Further, suppose it is now known that: • B ∧ C → R
An Example • Combining our clauses regarding R, we obtain: • R = (A ∧ B) ∧ (B ∧ C) ∧ ¬(A ∧ ¬B) = A ∧ B ∧ C ∧ (¬A ∨ B) • With De-Morgan's Theorem we can derive ¬R: • ¬R = ¬A ∨ ¬B ∨ ¬C ∨ (A ∧ ¬B)
An Example • φTR,* = min(φTA, φTB, φTC, max(φFA, φTB)) = min(0.7, 0.9, 0.2, max(0.2, 0.9)) = min(0.7, 0.9, 0.2, 0.9) = 0.2 • φFR,* = max(φFA, φFB, φFC, min(φTA, φFB)) = max(0.2, 0.1, 0.7, min(0.7, 0.1)) = max(0.2, 0.1, 0.7, 0.1) = 0.7
An Example • Updating the beliefs: • φTR,2 = 0.4 × 0.42 + (1.0 – 0.4) × 0.2 = 0.288 • φFR,2 = 0.4 × 0.12 + (1.0 – 0.4) × 0.7 = 0.468 • This gives us DIR,2 = -0.18, UR,2 = 0.41, plR,2 = 0.532 and igR,2 = 0.244.
An Example • Here the new evidence that B ∧ C → R fails to support R, because C is not true (DIC = -0.5). • Hence the plausibility of R falls from 0.88 to 0.532, while the truth value DIR,2 enters the negative range.
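Putting the connectives and the revision rule together reproduces the numbers in this example; the sketch below assumes the baf_and, baf_or, baf_not and revise helpers from the earlier sketches:

```python
# Masses from the knowledge base, as (phi_t, phi_f) pairs
A, B, C = (0.7, 0.2), (0.9, 0.1), (0.2, 0.7)
alpha = 0.4

# Time 1: R = A AND B AND (NOT A OR B), from the rules A ∧ B → R and A ∧ ¬B → ¬R
derived = baf_and(baf_and(A, B), baf_or(baf_not(A), B))
R = revise((0.0, 0.0), derived, alpha)       # revise the default (total ignorance)
print([round(m, 3) for m in R])              # [0.42, 0.12]

# Time 2: the new rule B ∧ C → R brings C into the conjunction
derived = baf_and(baf_and(baf_and(A, B), C), baf_or(baf_not(A), B))
R = revise(R, derived, alpha)
print([round(m, 3) for m in R])              # [0.288, 0.468]
```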
Integrating Belief Measures with Frames • Belief measures to quantify: • The existence of the object/concept represented by the frame. • The existence of relations between frames
Integrating Belief Measures with Frames • Deriving Belief Values • BAF-Logic statements can be used to derive belief measures. • For example, suppose we propose that: • Sam is Bob's son if Sam is male and Bob has a child. • Within our knowledge base, we have {(Sam is male, 0.6, 0.2), (Bob has child, 0.8, 0.1), (Sam is male ∧ Bob has child → Sam is Bob's Son, 0.7, 0.1)}
Integrating Belief Measures with Frames • Assuming that α = 0, we can derive: φTsam,son,bob = min(0.6, 0.8, 0.7) = 0.6 φFsam,son,bob = max(0.2, 0.1, 0.1) = 0.2 DIsam,son,bob = 0.4 plsam,son,bob = 0.8 igsam,son,bob = 0.2
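The same min/max pattern reproduces these numbers directly; a short sketch, with variable names chosen for readability rather than taken from the slides:

```python
sam_is_male   = (0.6, 0.2)
bob_has_child = (0.8, 0.1)
rule          = (0.7, 0.1)  # Sam is male AND Bob has child -> Sam is Bob's son

phi_t = min(sam_is_male[0], bob_has_child[0], rule[0])  # 0.6
phi_f = max(sam_is_male[1], bob_has_child[1], rule[1])  # 0.2

di = phi_t - phi_f   # 0.4
pl = 1.0 - phi_f     # 0.8
ig = pl - phi_t      # 0.2 (remaining ignorance)
```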
Integrating Belief Measures with Frames • Daemons • Can be activated based on belief masses, DI, EI, Ig and Pl values. • Can act on DI, EI, Ig, Pl values for further processing. • E.g. if it is likely that Sam is Bob’s son, and if the ignorance is less than 0.2, create a new frame School, and set Sam, Student, School relationship.
Frame Operations • add_frame, del_frame, add_rel, etc. • More interesting operations include abstract: • Given a set of frames • Create a super-frame that is the parent of the set of frames. • Copy relations that occur in at least a threshold percentage of the set of frames to the super-frame. • Set the belief masses to be a composition of all the belief masses in the set for that relation.
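A rough sketch of what abstract could look like; the threshold parameter and the use of a simple average to compose the belief masses are assumptions made purely for illustration, since the composition method is not fixed here:

```python
def abstract(frames, threshold=0.5):
    """Build a super-frame from a set of frames.

    `frames` maps frame name -> {relation: (phi_t, phi_f)}.
    Relations present in at least `threshold` of the frames are copied up;
    their masses are composed here by a plain average (an assumption).
    """
    relation_counts = {}
    for relations in frames.values():
        for rel in relations:
            relation_counts[rel] = relation_counts.get(rel, 0) + 1

    super_frame = {}
    for rel, count in relation_counts.items():
        if count / len(frames) >= threshold:
            masses = [r[rel] for r in frames.values() if rel in r]
            phi_t = sum(m[0] for m in masses) / len(masses)
            phi_f = sum(m[1] for m in masses) / len(masses)
            super_frame[rel] = (phi_t, phi_f)
    return super_frame
```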
Application Examples: Discourse Understanding • Discourse can be translated into a machine-understandable form before being cast as BAFs. • Discourse Representation Structures (DRS) are particularly useful. • The algorithm to convert from DRS to BAF is trivial [Tan03].
Application Examples: Discourse Understanding • Setting Belief Masses • Initial belief masses may be set using fuzzy sets. • E.g. to model a person being helpful: • Shelpful = {1.0/"invaluable", 0.75/"very helpful", 0.5/"helpful", 0.25/"unhelpful", 0.0/"uncooperative"} • If we say that Kenny is very helpful, we can set: • φTkenny_helpful = 0.75 • φFkenny_helpful = 1.0 – 0.75 = 0.25
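One way to turn such a fuzzy set into initial masses; a sketch, with the complement rule φF = 1 - φT taken from the Kenny example (the dict name S_HELPFUL is illustrative):

```python
S_HELPFUL = {
    "invaluable":    1.0,
    "very helpful":  0.75,
    "helpful":       0.5,
    "unhelpful":     0.25,
    "uncooperative": 0.0,
}

def initial_masses(term):
    """Map a linguistic term to initial (phi_t, phi_f) masses."""
    phi_t = S_HELPFUL[term]
    return (phi_t, 1.0 - phi_t)

print(initial_masses("very helpful"))  # (0.75, 0.25)
```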
Application Examples: Discourse Understanding • Further propositions and rules may be inserted into the knowledge base to perform reasoning on the initial belief masses. • Propositions and rules are modeled as Prolog clauses.