
Default and Cooperative Reasoning in Multi-Agent Systems



  1. Default and Cooperative Reasoning in Multi-Agent Systems Chiaki Sakama Wakayama University, Japan Programming Multi-Agent Systems based on Logic Dagstuhl Seminar, November 2002

  2. Incomplete Knowledge in Multi-Agent System (MAS) • An individual agent has incomplete knowledge in an MAS. • In AI, a single agent performs default reasoning when its knowledge is incomplete. • In a multi-agent environment, caution is required when performing default reasoning based on an agent's incomplete knowledge.

  3. Default Reasoning by a Single Agent Let A be an agent and F a propositional sentence. When A |≠ F, F is not proved by A, and ~F is assumed by default (negation as failure).
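  As a minimal single-agent sketch of negation as failure in Prolog (the bird/flies/abnormal predicates are illustrative names, not from the talk):

  % tweety is assumed to fly because abnormal(tweety) is not provable
  flies(X) :- bird(X), not(abnormal(X)).
  bird(tweety).
  % ?- flies(tweety).  succeeds: ~abnormal(tweety) is assumed by default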

  4. Default Reasoning in a Multi-Agent Environment Let A1 ,…, An be agents and F a propositional sentence. When A1 |≠ F (†), F is not proved by A1 but F may be proved by other agents A2 ,…, An . ⇒ It is unsafe to conclude ~F by default on the evidence of (†) alone.

  5. Default Reasoning vs. Cooperative Reasoning in MAS • An agent can perform default reasoning when its incomplete belief concerns the agent's internal world. • If, instead, an agent has incomplete knowledge about its external world, it is more appropriate to perform cooperative reasoning.

  6. Purpose of this Research • It is necessary to distinguish different types of incomplete knowledge in an agent. • We consider a multi-agent system based on logic and provide a framework of default/cooperative reasoning in an MAS.

  7. Problem Setting • An MAS consists of a finite number of agents. • Every agent has the same underlying language and a shared ontology. • Each agent has a knowledge base written as a logic program.

  8. Multi-Agent Logic Program (MLP) • Given an MAS { A1 ,…, An } with agents Ai (1≦i≦n), a multi-agent logic program (MLP) is defined as a set { P1 ,…, Pn } where Pi is the program of Ai . • Pi is an extended logic program which consists of rules of the form: L0 ← L1 ,…, Lm , not Lm+1 ,…, not Ln where each Li is a literal and not represents negation as failure.
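  For instance, an extended logic program rule can combine the two kinds of negation (a standard illustration, not from the talk):

  fly(x) ← bird(x), not ¬fly(x)

  which reads: conclude fly(x) if bird(x) holds and the explicitly negated literal ¬fly(x) cannot be proved (negation as failure).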

  9. Terms / Notations • A predicate that appears in the head of no rule of a program is called external; otherwise it is called internal. A literal with an external/internal predicate is called an external/internal literal. • ground(P): the ground instantiation of a program P. • Lit(P): the set of all ground literals appearing in ground(P). • Cn(P) = { L | L is a ground literal s.t. P |= L }.
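  For example, in the travel program P1 of slide 14 (below), the predicate state appears in the head of no rule of P1, so state is external for A1, while travel, reserve, date, scheduled and flight are internal.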

  10. Answer Set Semantics Let P be a program and S a set of ground literals satisfying the conditions: 1. P^S is the set of ground rules s.t. L0 ← L1 ,…, Lm is in P^S iff L0 ← L1 ,…, Lm , not Lm+1 ,…, not Ln is in ground(P) and { Lm+1 ,…, Ln } ∩ S = φ. 2. S = Cn( P^S ). Then, S is called an answer set of P.
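  As a worked instance: for P = { p ← not q } and S = { p }, the reduct P^S = { p ← } since q ∉ S, and Cn(P^S) = { p } = S, so { p } is an answer set. For S = { q } the rule is deleted (q ∈ S), so P^S = φ and Cn(P^S) = φ ≠ S; hence { q } is not an answer set.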

  11. Rational Agents • A program P is consistent if P has a consistent answer set. • An agent Ai is called rational if it has a consistent program Pi. • We assume an MAS {A1 ,…, An } where each agent Ai is rational.

  12. Semantics of MLP Given an MLP { P1 ,…, Pn }, the program Πi is defined as: (i) Pi ⊆ Πi ; (ii) Πi is a maximal consistent subset of P1 ∪ … ∪ Pn . A set S of ground literals is called a belief set of an agent Ai if S = T ∩ Lit(Pi) where T is an answer set of Πi .
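  For instance, in the cinema MLP of slide 16 below, P1 ∪ P2 ∪ P3 is inconsistent, so there are two candidates for Π1: P1 ∪ P2 and P1 ∪ P3. Their answer sets, intersected with Lit(P1), yield exactly the two belief sets of A1 shown on that slide.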

  13. Belief Sets • An agent has multiple belief sets in general. • Belief sets are consistent and minimal under set inclusion. • Given an MAS { A1 ,…, An }, an agent Ai (1≦i≦n) concludes a propositional sentence F (written Ai |= F) if F is true in every belief set of Ai .

  14. Example Suppose an MLP { P1, P2 } such that
  P1: travel( Date, Flight# ) ← date( Date ), not scheduled( Date ), reserve( Date, Flight# ).
      reserve( Date, Flight# ) ← flight( Date, Flight# ), not state( Flight#, full ).
      date( d1 )←.  date( d2 )←.  scheduled( d1 )←.
      flight( d1, f123 )←.  flight( d2, f456 )←.  flight( d2, f789 )←.
  P2: state( f456, full )←.

  15. Example (cont.) The agent A1 has the single belief set { travel( d2, f789 ), reserve( d2, f789 ), date( d1 ), date( d2 ), scheduled( d1 ), flight( d1, f123 ), flight( d2, f456 ), flight( d2, f789 ), state( f456, full ) }

  16. Example Suppose an MLP { P1, P2, P3 } such that
  P1: go_cinema ← interesting, not crowded
      ¬go_cinema ← ¬interesting
  P2: interesting ←
  P3: ¬interesting ←
  The agent A1 has two belief sets: { go_cinema, interesting } and { ¬go_cinema, ¬interesting }

  17. Abductive Logic Programs • An abductive logic program is a tuple 〈P, A〉 where P is a program and A is a set of hypotheses (possibly containing rules). • 〈P, A〉 has a belief set S_H if S_H is a consistent answer set of P ∪ H where H ⊆ A. • A belief set S_H is maximal (with respect to A) if there is no belief set T_K such that H ⊂ K.

  18. Abductive Characterization of MLP Given an MLP { P1 ,…, Pn }, an agent Ai has a belief set S iff S = T_H ∩ Lit(Pi) where T_H is a maximal belief set of the abductive logic program 〈 Pi , P1 ∪ … ∪ Pi-1 ∪ Pi+1 ∪ … ∪ Pn 〉.
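  Continuing the cinema example of slide 16: A1's belief sets are those of the abductive logic program 〈 P1 , P2 ∪ P3 〉. The maximal consistent hypothesis sets are H = { interesting ← } and H = { ¬interesting ← } (adopting both is inconsistent), which reproduce the two belief sets { go_cinema, interesting } and { ¬go_cinema, ¬interesting }.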

  19. Problem Solving in MLP • We consider an MLP {P1 ,…, Pn} where each Pi is a stratified normal logic program. • Given a query ← G, an agent solves the goal in a top-down manner. • Any internal literal in a subgoal is evaluated within the agent. • Any external literal in a subgoal is suspended and the agent asks other agents whether it is proved or not.

  20. Simple Meta-Interpreter
  % conjunction: prove both subgoals in order
  solve(Agent, (Goal1, Goal2)) :- solve(Agent, Goal1), solve(Agent, Goal2).
  % negation as failure
  solve(Agent, not(Goal)) :- not(solve(Agent, Goal)).
  % an internal literal is evaluated within the agent's own knowledge base
  solve(Agent, int(Fact)) :- kb(Agent, Fact).
  solve(Agent, int(Goal)) :- kb(Agent, (Goal :- Subgoal)), solve(Agent, Subgoal).
  % an external literal is suspended and asked of any agent that can prove it
  solve(Agent, ext(Fact)) :- kb(AnyAgent, Fact).
  solve(Agent, ext(Goal)) :- kb(AnyAgent, (Goal :- Subgoal)), solve(AnyAgent, Subgoal).

  21. Example Recall the MLP { P1, P2 } with
  P1: travel( Date, Flight# ) ← date( Date ), not scheduled( Date ), reserve( Date, Flight# ).
      reserve( Date, Flight# ) ← flight( Date, Flight# ), not state( Flight#, full ).
      date( d1 )←.  date( d2 )←.  scheduled( d1 )←.
      flight( d1, f123 )←.  flight( d2, f456 )←.  flight( d2, f789 )←.
  P2: state( f456, full )←.
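  For the meta-interpreter of slide 20, this MLP might be encoded as follows (a hypothetical encoding: the kb/2 representation and the agent names a1, a2 are assumptions for illustration; state is annotated ext for A1, all other predicates int):

  % P1, the knowledge base of agent a1
  kb(a1, (travel(D, F) :- int(date(D)), not(int(scheduled(D))), int(reserve(D, F)))).
  kb(a1, (reserve(D, F) :- int(flight(D, F)), not(ext(state(F, full))))).
  kb(a1, date(d1)).   kb(a1, date(d2)).   kb(a1, scheduled(d1)).
  kb(a1, flight(d1, f123)).   kb(a1, flight(d2, f456)).   kb(a1, flight(d2, f789)).
  % P2, the knowledge base of agent a2
  kb(a2, state(f456, full)).
  % ?- solve(a1, int(travel(D, F))).  yields D = d2, F = f789 (cf. slide 27)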

  22. Example (cont.)
  Goal: ← travel( Date, Flight# ).
  P1: travel( Date, Flight# ) ← date( Date ), not scheduled( Date ), reserve( Date, Flight# ).
  G: ← date( Date ), not scheduled( Date ), reserve( Date, Flight# ).

  23. Example (cont.)
  G: ← date( Date ), not scheduled( Date ), reserve( Date, Flight# ).
  P1: date( d1 )←.  date( d2 )←.
  G: ← not scheduled( d1 ), reserve( d1, Flight# ).
  P1: scheduled( d1 )←.  ⇒ fail

  24. Example (cont.) ! Backtrack
  G: ← date( Date ), not scheduled( Date ), reserve( Date, Flight# ).
  P1: date( d1 )←.  date( d2 )←.
  G: ← not scheduled( d2 ), reserve( d2, Flight# ).
  G: ← reserve( d2, Flight# ).

  25. Example (cont.)
  G: ← reserve( d2, Flight# ).
  P1: reserve( Date, Flight# ) ← flight( Date, Flight# ), not state( Flight#, full ).
  G: ← flight( d2, Flight# ), not state( Flight#, full ).
  P1: flight( d2, f456 )←.  flight( d2, f789 )←.
  G: ← not state( f456, full ).
  ! Suspend the goal and ask P2 whether state( f456, full ) holds or not.

  26. Example (cont.)
  G: ← not state( f456, full ).
  P2: state( f456, full )←.  ⇒ fail
  ! Backtrack
  G: ← flight( d2, Flight# ), not state( Flight#, full ).
  P1: flight( d2, f456 )←.  flight( d2, f789 )←.
  G: ← not state( f789, full ).

  27. Example (cont.)
  G: ← not state( f789, full ).
  ! Suspend the goal and ask P2 whether state( f789, full ) holds or not.
  The goal ← state( f789, full ) fails in P2, so G succeeds in P1. As a result, the initial goal ← travel( Date, Flight# ) has the unique solution Date = d2 and Flight# = f789.

  28. Correctness Let {P1 ,…, Pn} be an MLP where each Pi is a stratified normal logic program. If an agent Ai solves a goal G with an answer substitution θ, then Ai |= Gθ, i.e., Gθ is true in the belief set of Ai .

  29. Further Issue • The present system suspends a goal with an external predicate and waits for responses from other agents. • When an expected response is known in advance, speculative computation [Satoh et al., 2000] would be useful to proceed with the computation without waiting for responses.

  30. Further Issue • The present system asks every other agent about the provability of external literals and has no strategy for adopting responses. • When the source of information is known, it is effective to designate which agents to ask, or to discriminate among agents based on their reliability.

  31. Summary • A declarative semantics of default and cooperative reasoning in an MAS is provided. Belief sets characterize different types of incompleteness of an agent in an MAS. • A proof procedure for query-answering in an MAS is provided. It is sound under the belief set semantics when an MLP is stratified.
