This research aims to address the uncertainty associated with events in active systems, such as illegal stock trading and noisy sensors, by developing a language and execution model. The project involves defining languages to represent uncertainty regarding events, creating probabilistic logics for inference, and quantifying event occurrence probabilities. The study builds upon existing deterministic systems, but focuses on handling uncertainty efficiently. Rule examples and inference issues, such as indeterminism and termination, are also discussed.
Technion – Israel Institute of Technology
A Language and Execution Model for the Inference of Uncertain Events in Active Systems
Segev Wasserkrug
Advisors: Avigdor Gal, Opher Etzion
[Title-slide diagram: stockQuote, stockPurchase, stockSell, stockValue, illegalStockTrading]
Active Systems
• Systems that contain active (event-driven) components
• [Diagram: stockPurchase and stockSell events composed into illegalStockTrading]
• Problem: not all events of interest are signaled; some must be inferred
• Solution: deterministic event composition languages have been defined
• Problem: uncertainty is associated with events. Examples:
  • Inherent uncertainty associated with inference
  • Noisy/unreliable sensors in a sensor network
  • Illegal stock trading – an inherently uncertain inference
Research Goal
• A language and execution model for the inference of uncertain events in active systems:
• Language:
  • Represent uncertainty regarding events
  • Define uncertain inference rules
  • Define semantics
• Execution Model:
  • Algorithm(s) for quantifying event occurrence probabilities
  • Calculate the correct probabilities as defined by the rule semantics
  • Based on the rules and incoming information regarding event occurrence
Existing Works
• [Diagram: events E1–E6, Er composed in an inference network]
• Deterministic event composition systems (e.g. ODE, SMRL)
  • No uncertainty handling mechanism
• Uncertainty regarding the occurrence time of events in distributed systems (Liebig et al.)
• Early works in the area of sensor networks (Garofalakis et al., Wang et al.)
  • Bayesian Networks for the representation of noisy/unreliable sensor readings
  • Deterministic inference language – translated to individual probabilistic queries
• This work builds upon: probabilistic logics (representation), KBMC, Bayesian Networks (quantification)
Events and Event Histories
• [Timeline diagram: events e1 and e2 on time axes 1–5 and 1–6]
• Event: an actual occurrence that is significant, instantaneous and atomic
  • Example: illegal stock trading of Intel Corporation stock at 10AM, carried out by Customer1
  • Can be represented by a tuple of values, e.g. illegalStockQuote1 = <10, INTC, $1,000,000, Customer1>
• Event history (eh): the set of all events between two points in time
  • A set of ordered tuples
  • The event history between times t1 and t2 is denoted eh(t1, t2)
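The tuple view of events and histories can be sketched in a few lines of Python. This is an illustrative encoding only; the field names (`time`, `symbol`, `amount`, `customer`) mirror the slide's example tuple and are not the thesis's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    # An event is significant, instantaneous and atomic:
    # a tuple of attribute values, mirroring the slide's example.
    time: int
    symbol: str
    amount: int
    customer: str

# illegalStockQuote1 = <10, INTC, $1,000,000, Customer1>
illegal_stock_quote_1 = Event(10, "INTC", 1_000_000, "Customer1")

def event_history(events, t1, t2):
    """All events with t1 <= time <= t2, ordered by occurrence time."""
    return sorted((e for e in events if t1 <= e.time <= t2),
                  key=lambda e: e.time)
```

An event history is then just the ordered subset of events falling inside a time window, matching the "set of ordered tuples" definition.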
Event Uncertainty Representation
• Event Instance Data (EID): the information the system has about the event
  • May be uncertain: uncertainty about the attributes as well as the actual occurrence
  • Represented by a set of tuples with associated probabilities, e.g.
    • {notOccurred} w.p. 0.3
    • <5, INTC, $1,000,000, Customer1> w.p. 0.3
    • <10, INTC, $1,000,000, Customer1> w.p. 0.4
• Equivalent representation: a random variable (RV) with a marginal distribution
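An EID is therefore a discrete distribution over "did not occur" plus the candidate attribute tuples. A minimal sketch, using the slide's example values (the helper `occurrence_probability` is illustrative, not from the thesis):

```python
# An EID as a discrete marginal distribution: outcomes map to probabilities.
NOT_OCCURRED = "notOccurred"

eid = {
    NOT_OCCURRED: 0.3,
    (5, "INTC", 1_000_000, "Customer1"): 0.3,   # uncertain occurrence time
    (10, "INTC", 1_000_000, "Customer1"): 0.4,
}

# A valid distribution: the outcome probabilities sum to 1.
assert abs(sum(eid.values()) - 1.0) < 1e-9

def occurrence_probability(eid):
    """P(the event occurred at all) = 1 - P(notOccurred)."""
    return 1.0 - eid.get(NOT_OCCURRED, 0.0)
```

Here `occurrence_probability(eid)` yields 0.7: the event occurred with one of two possible timestamps, or not at all.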
Probability Space
• [Timeline diagram: possible placements of events e1 and e2 on time axes 1–5 and 1–6]
• A different probability space Pt for every point in time t, defining the occurrence probability of events up to t
• Intuitive definition of Pt – possible-worlds semantics:
  • Every event history the system considers possible is a "possible world"
  • Pt = (Wt, Ft, μt), where:
    • Wt is the set of possible worlds (event histories)
    • Ft is a σ-algebra over the possible worlds
    • μt is a probability measure over Ft
• Computationally convenient definition of Pt in terms of RVs:
  • Each RV represents the uncertainty associated with a specific event (EID)
  • Semantically equivalent when the overall number of events is finite
  • The possible event histories are represented by a system event history EH – a set of EIDs (RVs) E1,…,Em
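The equivalence between the two definitions can be made concrete by enumerating possible worlds from the EIDs. A sketch, assuming for simplicity that the EIDs are mutually independent (the thesis's rule semantics refine this with explicit independence assumptions):

```python
from itertools import product

def possible_worlds(eids):
    """Enumerate every event history (possible world) and its probability.

    `eids` maps an EID name to its marginal distribution
    {outcome: probability}. Independence of the EIDs is assumed here,
    so each world's probability is the product of its outcome probabilities.
    """
    names = list(eids)
    for values in product(*(eids[n].items() for n in names)):
        history = {n: outcome for n, (outcome, _) in zip(names, values)}
        prob = 1.0
        for _, p in values:
            prob *= p
        yield history, prob
```

With m finite-valued EIDs this yields a finite set of worlds whose probabilities sum to 1, which is exactly the possible-worlds measure μt built from the RV view.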
Rule Example
• Semantics – quantitative:
  • If the event history is such that the rule's condition holds, the new event is considered possible with the defined probability
  • The attribute values of the new event are defined by mapping expressions
  • If the condition does not hold, the new event is not possible
Rule Example (Cont.)
• Semantics – qualitative (independence):
  • Given the state of the relevant events, the inferred event is probabilistically independent of all other events
  • For the above rule, only EIDs corresponding to events of class1 or class2 are relevant
  • Used to increase inference efficiency
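Putting the quantitative and qualitative parts together, a rule can be sketched as a condition over the relevant events, an inference probability, and a mapping for the new event's attributes. All field names below are illustrative, not the thesis's actual rule syntax, and the 0.9 probability is an assumed value since the slides leave it unspecified:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    relevant_classes: tuple   # only these EIDs matter (qualitative semantics)
    condition: Callable       # predicate over the event history (quantitative)
    probability: float        # P(inferred event occurs | condition holds)
    mapping: Callable         # mapping expressions for the new event's attributes

# Sketch of r1 from the running example: e1 and e2 both occur => e4 possible.
r1 = Rule(
    relevant_classes=("class1", "class2"),
    condition=lambda h: h["e1"] == "Occurred" and h["e2"] == "Occurred",
    probability=0.9,   # assumed for illustration; not given in the slides
    mapping=lambda h: "e4",
)
```

The `relevant_classes` field captures the independence statement: the inferred event's distribution depends only on EIDs of those classes, which is what lets inference prune the rest of the event history.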
Inference Example
• EID E1 (corresponding to event e1 of type class1) reaches the system at time 5:
  • {notOccurred} w.p. 0.3
  • {Occurred} w.p. 0.7
• EID E3 (class3) reaches the system at time 5:
  • {notOccurred} w.p. 0.5
  • {Occurred} w.p. 0.5
• EID E2 (class2), corresponding to event e2, reaches the system at time 5:
  • {notOccurred} w.p. 0.4
  • {Occurred} w.p. 0.6
Inference Issues
• Indeterminism: may different EIDs be inferred in different triggerings of the algorithm?
  • Resolved by defining an order on the rules
• Termination: can infinite rule-triggering cycles occur?
  • No – a rule may be triggered at most once
• Correctness: how can we ensure that the overall probability space adheres to the specified semantics?
  • Create a Bayesian Network based on the semantics
• Efficiency: how can inference efficiency be maintained while the Bayesian Network structure is constantly updated?
  • Smart updating of the Bayesian Network structure
  • A sampling algorithm that bypasses Bayesian Network construction
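The sampling idea can be illustrated without any Bayesian Network: repeatedly sample a concrete event history from the base EIDs, fire each rule at most once in the fixed rule order (which also addresses indeterminism and termination), and count. This is only a Monte Carlo sketch of the approach, not the thesis's actual algorithm; the rule probabilities of 1.0 in the usage below are an assumption, since the slides leave them unspecified.

```python
import random

def sample_inference(base_eids, rules, query, n=100_000, seed=0):
    """Estimate the query event's occurrence probability by sampling.

    base_eids: {event_name: occurrence probability} for signaled events.
    rules: ordered list of (inferred_name, (condition, probability)),
           each rule fired at most once, in order.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # Sample a concrete world for the base (signaled) events.
        world = {name: rng.random() < p for name, p in base_eids.items()}
        # Fire rules in their fixed order, each at most once.
        for inferred, (cond, p) in rules:
            world[inferred] = cond(world) and rng.random() < p
        hits += world[query]
    return hits / n

# Running example: e4 <= e1 and e2; e5 <= e2 and e3; e6 <= e4 and not e5.
base = {"e1": 0.7, "e2": 0.6, "e3": 0.5}
rules = [
    ("e4", (lambda w: w["e1"] and w["e2"], 1.0)),
    ("e5", (lambda w: w["e2"] and w["e3"], 1.0)),
    ("e6", (lambda w: w["e4"] and not w["e5"], 1.0)),
]
estimate = sample_inference(base, rules, "e6")  # ≈ 0.21 analytically
```

Because e4 and e5 share e2, they are dependent; the sampler handles this for free, since each sampled world is internally consistent.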
Bayesian Network Construction
• r1: e1 and e2 both occur ⇒ e4 considered possible
• The semantics + rule definition determine the node's conditional probability table
Bayesian Network Construction (Cont.)
• r2: e2 and e3 both occur ⇒ e5 considered possible
• The semantics + rule definition determine the node's conditional probability table
Bayesian Network Construction (Cont.)
• r3: e4 occurs and e5 does not occur ⇒ e6 considered possible
• The semantics + rule definition determine the node's conditional probability table
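For this small network the exact probabilities can be checked by brute-force enumeration of the eight base worlds, applying r1–r3 deterministically. A sketch, assuming (as the slides leave unspecified) that each rule infers its event with probability 1 when its condition holds:

```python
from itertools import product

# Base occurrence probabilities from the inference example.
BASE = {"e1": 0.7, "e2": 0.6, "e3": 0.5}

def probability_e6(base=BASE):
    """Sum P(world) over the base worlds in which r1-r3 yield e6."""
    p_e6 = 0.0
    for occ in product([True, False], repeat=3):
        world = dict(zip(base, occ))
        p = 1.0
        for name, happened in world.items():
            p *= base[name] if happened else 1 - base[name]
        e4 = world["e1"] and world["e2"]   # r1
        e5 = world["e2"] and world["e3"]   # r2
        e6 = e4 and not e5                 # r3
        if e6:
            p_e6 += p
    return p_e6
```

Since e6 requires e1 and e2 to occur and (through ¬e5) e3 not to occur, the result is 0.7 × 0.6 × 0.5 = 0.21, which is what a correctly constructed Bayesian Network over E1–E6 must also yield.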
Example Probabilities
• Updates to the data:
  • e2 occurs with certainty 1
  • e3 does not occur, with certainty 1
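After these updates the base marginals become degenerate and the inferred probabilities change accordingly. A small sketch under the same assumption as before (rules fire with probability 1 when triggered, which the slides leave unspecified):

```python
# Updated evidence: e2 certain, e3 certainly absent; e1 unchanged.
base = {"e1": 0.7, "e2": 1.0, "e3": 0.0}

p_e4 = base["e1"] * base["e2"]                      # r1: e1 and e2 -> 0.7
p_e5 = base["e2"] * base["e3"]                      # r2: e2 and e3 -> 0.0
p_e6 = base["e1"] * base["e2"] * (1 - base["e3"])   # r3: e4 and not e5 -> 0.7
```

With e5 ruled out entirely, e6 now tracks e4 and inherits e1's occurrence probability of 0.7, illustrating the "updated inference" mode: new evidence propagates through the existing rule structure.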
Summary
• The need and the theory were explained and demonstrated by a simple example
• The solution contains:
  • A detailed underlying probabilistic theory
  • An advanced language
  • Three inference algorithms (language agnostic):
    • Inference from scratch
    • Updated inference
    • Sampling
Future Work
• Enhance the languages to include probabilistic predicates
• Create learning algorithms to generate rules:
  • Structure of rules – based on experience/expert knowledge
  • Probabilities inferred from historical data