
WP7: Empirical Studies








  1. WP7: Empirical Studies Presenters: Paolo Besana, Nardine Osman, Dave Robertson

  2. Outline of This Talk • Introduce overall framework • Identify four key areas: • Interaction availability • Consistency interaction-peer • Consistency peer-peer • Consistency with environment In each of these areas it is impossible to guarantee the general property we ideally would require, so the goal of analysis is to identify viable engineering compromises and explore how they scale.

  3. Basic Conceptual Framework
     [Diagram: peer P, in environment EP, runs interaction model M(P,R) and interacts with peers P1…Pn in environments EP1…EPn]
     P = process name
     R = role of P
     M(P,R) = interaction model for P in role R
     EP = environment of P

  4. Simulation as Clause Rewriting

  5. Ensuring Interactions are Available
     R∈R(P) → ◊(M(P,R)∈MP ∧ (i(M(P,R)) → ◊a(M(P,R))))
     R(P) = roles P wants to undertake
     MP = interactions known to P: {M(P,R), …}
     i(M(P,R)) = M(P,R) is initiated
     a(M(P,R)) = M(P,R) is completed successfully

  6. Specific Question • Suppose that the same interaction patterns are being used repeatedly in overlapping peer groups. • To what extent can basic statistical information about success/failure of interaction models solve matchmaking problems? See Deliverable 7.1 for discussion of this
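The statistical bookkeeping this question presupposes is light. A minimal Python sketch (the `Matchmaker` class and model names are hypothetical, not from Deliverable 7.1) of recording success/failure outcomes per interaction model and recommending the model with the best smoothed success rate:

```python
from collections import defaultdict

class Matchmaker:
    """Track success/failure counts for interaction models and
    recommend the one with the best smoothed success rate."""

    def __init__(self):
        self.stats = defaultdict(lambda: [0, 0])  # model -> [successes, trials]

    def record(self, model, succeeded):
        s = self.stats[model]
        s[0] += 1 if succeeded else 0
        s[1] += 1

    def success_rate(self, model):
        s, n = self.stats[model]
        return (s + 1) / (n + 2)  # Laplace smoothing: unseen models score 0.5

    def best_model(self, candidates):
        return max(candidates, key=self.success_rate)

mm = Matchmaker()
for ok in [True, True, False]:
    mm.record("auction_v1", ok)   # 2/3 raw, 0.6 smoothed
mm.record("auction_v2", True)     # 1/1 raw, 0.667 smoothed
print(mm.best_model(["auction_v1", "auction_v2"]))
```

The smoothing is one simple answer to the cold-start side of the matchmaking problem: a model observed once is not automatically trusted over one observed many times.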

  7. Consistency Peer - Interaction Model
     A∈K(P) ∧ (B∈K(M(P,R)) ∨ ◊B∈K(M(P,R))) → σ(A∧B)
     K(X) = knowledge derivable from X
     σ(F) = F is consistent

  8. Specific Question • Each interaction model imposes temporal constraints • Peers have deontic constraints • What sorts of properties required by peers (e.g. trust properties) or by interaction modellers (e.g. fairness properties) can we test using this information alone?

  9. Example
     In an auction, the auctioneer agent wants an interaction protocol that enforces truth telling on the bidders’ side.
     A = [bid(bidder,V) ⇒ win(bidder,PV)] ∧ [bid(bidder,B) ⇒ win(bidder,PB) ∧ B≠V] ∧ PB≮PV, where A∈K(P)
     We would like to verify: A∈K(P) ∧ (B∈K(M(P,R)) ∨ ◊B∈K(M(P,R))) → σ(A∧B)

  10. Verifying σ(A∧B)
     Verify M(P,R) satisfies A, searching the interaction state-space:
     • Is A satisfied at state 1?
     • If the result is achieved, then terminate
     • else, go to the next state(s) and repeat

  11. Property Checking Framework
     [Architecture diagram: temporal properties, the interaction state-space and deontic constraints feed a model checker built from the temporal proof rules and LCC transition rules, implemented on the tabled Prolog engine of the XSB system]

  12. Temporal Proof Rules
     satisfies(E, tt) ← true
     satisfies(E, Φ1∧Φ2) ← satisfies(E, Φ1) ∧ satisfies(E, Φ2)
     satisfies(E, Φ1∨Φ2) ← satisfies(E, Φ1) ∨ satisfies(E, Φ2)
     satisfies(E, <A>Φ) ← ∃F. trans(E,A,F) ∧ satisfies(F, Φ)
     satisfies(E, [A]Φ) ← ∀F. trans(E,A,F) → satisfies(F, Φ)
     satisfies(E, μZ.Φ) ← satisfies(E, Φ)
     satisfies(E, νZ.Φ) ← dual(Φ, Φ′) ∧ ¬satisfies(E, Φ′)
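Transliterated from Prolog into Python under simplifying assumptions (an explicit finite transition list and no fixpoint operators), the core `satisfies` rules might look like:

```python
def satisfies(state, phi, trans):
    """phi is a tuple: ('tt',), ('and', p, q), ('or', p, q),
    ('dia', a, p) for <a>p, ('box', a, p) for [a]p.
    trans is a list of (state, action, next_state) triples."""
    op = phi[0]
    if op == 'tt':
        return True
    if op == 'and':
        return satisfies(state, phi[1], trans) and satisfies(state, phi[2], trans)
    if op == 'or':
        return satisfies(state, phi[1], trans) or satisfies(state, phi[2], trans)
    # modal cases: collect the a-successors of the current state
    succ = [f for (e, a, f) in trans if e == state and a == phi[1]]
    if op == 'dia':   # <a>p: some a-successor satisfies p
        return any(satisfies(f, phi[2], trans) for f in succ)
    if op == 'box':   # [a]p: every a-successor satisfies p
        return all(satisfies(f, phi[2], trans) for f in succ)
    raise ValueError(op)

trans = [(1, 'bid', 2), (2, 'win', 3), (2, 'lose', 4)]
print(satisfies(1, ('dia', 'bid', ('box', 'win', ('tt',))), trans))  # True
```

The tabling that XSB provides is what makes the Prolog version terminate on the recursive μ/ν formulas; this sketch deliberately leaves those out.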

  13. LCC Transition Rules
     trans(E::D, A, F) ← trans(D, A, F)
     trans(E1 or E2, A, F) ← trans(E1, A, F) ∨ trans(E2, A, F)
     trans(E1 then E2, A, E2) ← trans(E1, A, nil)
     trans(E1 then E2, A, F then E2) ← trans(E1, A, F) ∧ F≠nil
     trans(E1 par E2, A, F par E2) ← trans(E1, A, F)
     trans(E1 par E2, A, E1 par F) ← trans(E2, A, F)
     trans(M⇐P, in(M), null) ← true
     trans(M⇒P, out(M), null) ← true
     trans(E←C, #(X), E) ← X in C ∧ sat(X) ∧ sat(C)
     trans(E←C, A, F) ← (A≠#) ∧ sat(C) ∧ trans(E, A, F)
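As a rough illustration of how such rewriting rules execute, here is a hypothetical Python sketch of just the `or`, `then` and outgoing-message rules, with clauses encoded as nested tuples:

```python
def lcc_trans(e):
    """Yield (action, residual_clause) pairs for an LCC clause fragment.
    Clauses are tuples: ('nil',), ('msg', m) for m => P,
    ('or', e1, e2), ('then', e1, e2)."""
    if e == ('nil',):
        return
    op = e[0]
    if op == 'msg':                  # M => P : emit the message, nothing remains
        yield ('out', e[1]), ('nil',)
    elif op == 'or':                 # E1 or E2 : either branch may move
        yield from lcc_trans(e[1])
        yield from lcc_trans(e[2])
    elif op == 'then':               # E1 then E2 : E1 moves first
        for a, f in lcc_trans(e[1]):
            # if E1 is exhausted the residue is E2, else (F then E2)
            yield a, e[2] if f == ('nil',) else ('then', f, e[2])

e = ('then', ('or', ('msg', 'bid'), ('msg', 'pass')), ('msg', 'result'))
print(sorted(a for a, _ in lcc_trans(e)))
```

Running this on the example clause yields the two possible first actions, sending `bid` or sending `pass`, each leaving `result` still to be sent; this is the clause-rewriting view of simulation from slide 4 in miniature.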

  14. Consistency Peer - Peer
     A∈K(P) ∧ Pi∈P(M(P,R)) ∧ B∈K(Pi) → σ(A∧B)
     P(M(P,R)) = peers involved in M(P,R)

  15. Specific Question • Agents in open environments may have different ontologies • Guaranteeing complete mappings between them is infeasible (ontologies can be inconsistent, can cover different domains, etc.) • Agents are interested in performing tasks: mapping is required only for the terms contextual to the interactions • Repetition of tasks provides the basis for modelling statistically the contexts of the interactions • To what extent can interaction models be used to focus the ontology mapping on the relevant sections of the ontology?

  16. Approach • Predicting the possible content of a message before processing can help to focus the mapping: • With no knowledge of the context and of the state of an interaction, a received message can be anything • the context can be used to guess the possible content of messages, filtering out unrelated elements • the guessed content is suggested to the ontology mapping engine • The entities in a received message mi(e1,...,en) are bound by the context of the interaction: • some entities are specific to the interaction type (purchase, request of information,...), • the set of possible entities is bound by concepts previously introduced in the interaction, • different entities may appear in a specific message with different frequencies

  17. Implementation Two phases: • Creating the model: • Entities appearing in messages are counted, obtaining their prior and conditional frequencies • Ontological relations between entities in different messages are checked and the verified relations are counted • Predicting the content of a message: • When a message is received, the probability distribution for all the terms is computed using the collected information and the current state of the interaction • The most probable terms form the set of suggestions for the ontology mapping engine The aim is to obtain the smallest possible set that is most likely to contain the entities actually used in the message.
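A minimal sketch of the two phases above, using a hypothetical `MessagePredictor` that keeps conditional frequency counts keyed on (message slot, interaction context); the real implementation also checks ontological relations between entities, which is omitted here:

```python
from collections import Counter, defaultdict

class MessagePredictor:
    """Phase 1: count which entities fill each message slot given the
    entities already seen in the interaction. Phase 2: suggest the k
    most frequent entities for that slot and context."""

    def __init__(self):
        self.cond = defaultdict(Counter)  # (slot, context) -> entity counts

    def observe(self, slot, context, entity):
        self.cond[(slot, frozenset(context))][entity] += 1

    def suggest(self, slot, context, k=3):
        counts = self.cond[(slot, frozenset(context))]
        return [e for e, _ in counts.most_common(k)]

p = MessagePredictor()
for _ in range(5):
    p.observe("m2", ["buy"], "book")   # "book" dominates this context
p.observe("m2", ["buy"], "cd")
print(p.suggest("m2", ["buy"], k=1))
```

The suggestion list is exactly the "smallest possible set" handed to the mapping engine: small k keeps the set small, the frequency ordering keeps the probability of containing the true entity high.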

  18. Mapping Evaluation Framework

  19. Testing • Interactions are abstract protocols, and agents have generated ontologies • allows us to simulate different types of relations between the messages • Community preferences over elements (best sellers, etc) are simulated by probability distributions • Interactions are run automatically hundreds of times • Results are compared with a uniform distribution of the entities (simulates no knowledge about context) • Equivalent size for same success rate • Equivalent success rate for same size of suggestion set

  20. Provisional Results • After 100 interactions, the predictor is able to provide a set smaller than 7% of the ontology size that, 70% of the time, contains the term actually used in message m2 • If all terms are equiprobable, the probability of such a set containing the term is directly proportional to the size of the (randomly picked) set.
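The uniform baseline is simple to state: a randomly picked k-element set drawn from an N-term ontology contains the true term with probability k/N. With numbers of the same shape as those reported (a 7% set of a 1000-term ontology, chosen here only for illustration):

```python
# Uniform baseline: a random k-of-N suggestion set contains the
# true entity with probability k/N.
N, k = 1000, 70          # illustrative: a 7% set of a 1000-term ontology
p_uniform = k / N
print(p_uniform)         # 0.07 -- versus ~0.70 reported for the predictor
```

So at equal set size the predictor's hit rate is roughly an order of magnitude above chance.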

  21. Consistency Peer - Environment
     A∈K(P) ∧ B∈K(EP) → σ(A∧B)

  22. Specific Question • Suppose we have a complex environment with adversarial agents • For specific goals, how complex do interaction models need to be in order to raise group performance significantly?

  23. Environment Simulation Framework
     [Diagram: a coordinating peer runs the interaction model; simulated agents act in an environment simulator]
     a(hunter,Id) ::
       sawHimAt(Location) ⇒ a(hunter,RID) ← visiblePlayer(Location) and strafeAttempt(Location)
       or strafeAttempt(Location) ← sawHimAt(Location) ⇐ a(hunter,RID)
       or movementAttempt(random_play)
     In words: you can be a hunter if you send a message revealing the location of a visible opponent player upon whom you are making a strafing attack, or make a strafing attack on a location if you have been told a player is there, or otherwise just do what seems right.
     [Results charts: comparative performance, random vs. coordinated; group convergence]
