
Multi-Agent-Systems

Multi-Agent-Systems. These slides are based on the book M. Wooldridge, "An Introduction to Multi-Agent-Systems", Wiley, 2001. They were used in the lecture "Verteiltes Problemlösen: Software Agenten und Semantic Web" (Distributed Problem Solving: Software Agents and Semantic Web), TU München, Dept. of Informatics, Summer 2006.



Presentation Transcript


  1. Multi-Agent-Systems These slides are based on the book M. Wooldridge, "An Introduction to Multi-Agent-Systems", Wiley, 2001. They were used in the lecture "Verteiltes Problemlösen: Software Agenten und Semantic Web", TU München, Dept. of Informatics, Summer 2006. They can be used for free in any academic teaching scenario. Slides and lecture: Dr. Georg Groh, TU München, Germany. grohg@in.tum.de

  2. Multi-Agent-Systems Lecture 2

  3. 2 Deductive Reasoning Agents

  4. Didactic remark: A useful prerequisite for this lecture is a basic understanding of First-Order Predicate Logic (FOL) (syntax and semantics) and of basic principles of deduction in FOL (soundness and completeness, unification, etc.). This knowledge can be obtained via chapters 6 and 7 of [3].

  5. 2.1 Logic and Reasoning Principles of Logic and Deduction (a form of reasoning) (→ declarative programming) • Logic language L with proper syntax and semantics (in most cases a fragment of FOL) • Sound and complete deduction formalism • The problem is stated via a set of formulae Φ (knowledge base) and a formula (theorem) φ to be proved. Deduction is syntactic manipulation of formulae following a fixed algorithm via a fixed set of deduction rules ρ (involving e.g. tableaux, unification, skolemization etc.) until the proof is complete. • Many forms of formulae (e.g. Horn logic (PROLOG)) and deduction algorithms (e.g. SLD resolution (PROLOG))
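
To make the knowledge-base / deduction split concrete, here is a minimal Python sketch of a propositional Horn knowledge base Φ and a naive forward-chaining prover for a theorem φ; the flow(tank776) conclusion and the rule that derives it from the two facts are invented purely for illustration.

    # Minimal forward chaining over propositional Horn clauses (illustrative only).
    # A rule is (premises, conclusion); a fact is a rule with no premises.
    kb = [
        ((), "open(valve221)"),                                          # fact
        ((), "pressure(tank776, 28)"),                                   # fact
        (("open(valve221)", "pressure(tank776, 28)"), "flow(tank776)"),  # Horn rule
    ]

    def proves(kb, goal):
        """Return True if goal follows from the Horn knowledge base kb."""
        known = set()
        changed = True
        while changed:
            changed = False
            for premises, conclusion in kb:
                if conclusion not in known and all(p in known for p in premises):
                    known.add(conclusion)
                    changed = True
        return goal in known

    print(proves(kb, "flow(tank776)"))   # True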

  6. 2.1 Logic and Deduction Principles of Logic and Deduction (→ declarative programming) → Agents as theorem provers • Transduction problem: how to translate the real world into a symbolic description (OCR, robot vision, speech recognition etc.) • Representation problem: how can a symbolic description language be found whose syntax and semantics reflect the world that is to be described (e.g. FOL) • Reasoning problem: algorithmic derivation of new knowledge (symbolic descriptions) from existing knowledge

  7. 2.2 Agents as Theorem Provers I • Definition: a deliberative agent or agent architecture • contains an explicitly represented, symbolic model of the world • makes decisions (e.g. about what actions to perform) via symbolic reasoning. Agent as theorem prover / declarative implementation of the formal agent model: • L : FOL with enumerable signature • D = 2^L : set of possible agent databases (knowledge bases): D = {Δ0, Δ1, ...} • Δ ⊢ρ φ if formula φ can be proved from database Δ using the set of deduction rules ρ

  8. 2.2 Agents as Theorem Provers II Database Δ takes the role of the internal state i; it holds the beliefs of the agent. Example: Δ = {open(valve221), pressure(tank776, 28)} (Diagram: sensor input from the environment enters the agent via see, next updates the database Δ, and action produces the action output back to the environment.)

  9. 2.2 Agents as Theorem Provers II
    function action(Δ : D) returns α : A {
        for each α ∈ A do { if ( Δ ⊢ρ Do(α) ) { return α } }
        for each α ∈ A do { if ( Δ ⊬ρ ¬Do(α) ) { return α } }
        return null
    }
  FOL: Do(α): Do is a predicate; α is an individuum symbol
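
A small executable rendering of this selection scheme, assuming a proves(Δ, φ) helper such as the forward-chainer sketched above; encoding Do(α) and ¬Do(α) as strings is an assumption made only for illustration, not the book's implementation.

    def action(delta, actions, proves):
        # First pass: return an action whose performance can be explicitly proved.
        for a in actions:
            if proves(delta, "Do(" + a + ")"):
                return a
        # Second pass: return an action that is at least not explicitly forbidden.
        for a in actions:
            if not proves(delta, "not Do(" + a + ")"):
                return a
        return None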

  10. 2.3 Example: The Vacuum World I Agent's objective: suck up all dirt • Possible actions: A = {turn, forward, suck} (turn = turn right 90 degrees) • Domain predicates (facts): In(x,y), Dirt(x,y), Facing(d) (d from {south, north, west, east}) • Agent's next function is next(Δ, p) = (Δ \ old(Δ)) ∪ new(Δ, p), where old(Δ) collects the old In, Dirt and Facing facts and new(Δ, p) computes the new facts from percept p

  11. 2.3 Example: The Vacuum World II • Agent's database rules (an illustrative encoding follows below): Objective rule (suck where there is dirt); Traversal rules for the first row, "and for all other rows accordingly"
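
The rule formulas themselves appear on the slide only as images; the Python sketch below encodes rules in their spirit. The dirt rule and the first-column traversal rules follow the example in [1]; the catch-all for the remaining rows is an assumption.

    # Illustrative rule base for the vacuum world (assumed details, see above).
    def choose_action(db):
        x, y = db["in"]              # In(x, y)
        facing = db["facing"]        # Facing(d)
        dirt = db["dirt"]            # set of Dirt(x, y) facts
        if (x, y) in dirt:
            return "suck"            # objective: In(x,y) and Dirt(x,y) -> Do(suck)
        if (x, y) == (0, 0) and facing == "north":
            return "forward"
        if (x, y) == (0, 1) and facing == "north":
            return "forward"
        if (x, y) == (0, 2) and facing == "north":
            return "turn"            # top of the column reached: turn right
        if (x, y) == (0, 2) and facing == "east":
            return "forward"
        return "turn"                # "all other rows accordingly" (placeholder)

    print(choose_action({"in": (0, 0), "facing": "north", "dirt": {(1, 2)}}))  # forward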

  12. 2.3.1 A note on wording issues and an "error" in [3],[5] I • In classic logic terminology, a Deduction Rule is a rule that allows one to syntactically deduce new formulas from given formulas. Together with the rule-processing algorithm (essentially, in most cases, an ordering of deduction rules), these deduction rules allow for reasoning (deduction). Example: Modus Ponens: from F1 and F1 → F2, deduce F2

  13. 2.3.1 A note on wording issues and an "error" in [3],[5] II • What is called in [1],[5] a deduction rule is in fact just a formula in the agent database (knowledge base). • In an agent's database we have facts like In(1,1), Dirt(0,2), Dirt(1,2), Facing(west), and we can have formulas (called rules), which are also in the database. Deduction rules (in the usual logic sense) are not in the database; they are part of the reasoning framework. (compare: program vs. interpreter) • In Description Logics, the set of facts in the database is called the A-Box and the set of rules in the database is called the T-Box. (see Part II of the lecture)

  14. 2.4 Performance & Realization Issues • In full FOL it is undecidable whether a formula follows logically (or syntactically via a deduction algorithm) from a set of formulae (database) • Sound and complete deduction for FOL is (at least) NP-hard → we cannot expect an efficient and reliable logic-based agent if we do not restrict the expressiveness of the logic used • see function: very hard to express the output of a vision system in logic • dynamic environments require temporal reasoning → very difficult → combine the logic approach with procedural approaches

  15. 2.5 Agent-Oriented Programming (AOP) I • AOP suggested by Y. Shoham (1993) • AOP starts from the Intentional Stance: program agents in terms of mentalistic notions (e.g. belief, desire, intention) • In fact, it follows the paradigm of declarative programming. AGENT0 • AGENT0 is a LISP-based AOP language • Each agent in AGENT0 has 4 components: • a set of capabilities (things the agent can do) • a set of initial beliefs • a set of initial commitments (things the agent will do) • a set of commitment rules

  16. 2.5 Agent-Oriented Programming (AOP) II AGENT0 • In the language of 2.2, 2.3.1: • beliefs roughly correspond to database facts • commitment rules correspond to database rules • Actions: one of 2 types: • private: internally executed computation • communicative: sending messages • Messages: one of 3 types: • requests to commit to actions • unrequests to refrain from actions • informs which pass on information

  17. 2.5 Agent-Oriented Programming (AOP) III AGENT0 • Each commitment rule contains: • message condition • mental condition • action • On each 'agent cycle': • the message condition is matched against messages the agent has received • the mental condition is matched against the beliefs of the agent • If the rule fires, then the agent becomes committed to the action (the action gets added to the agent's commitment set)

  18. 2.5 Agent-Oriented Programming (AOP) IV AGENT0 • Example for a commitment rule:
    COMMIT(
      ( agent, REQUEST, DO(time, action) ),        ;;; message condition
      ( B,
        [now, Friend agent] AND
        CAN(self, action) AND
        NOT [time, CMT(self, anyaction)] ),        ;;; mental condition
      self,
      DO(time, action)
    )

  19. 2.5 Agent-Oriented Programming (AOP) V • This rule may be paraphrased as: "if I receive a message from agent which requests me to do action at time, and I believe that: -- agent is currently a friend -- I can do the action -- at time, I am not committed to doing any other action, then commit to doing action at time"
    COMMIT(
      ( agent, REQUEST, DO(time, action) ),        ;;; message condition
      ( B,
        [now, Friend agent] AND
        CAN(self, action) AND
        NOT [time, CMT(self, anyaction)] ),        ;;; mental condition
      self,
      DO(time, action)
    )
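
A rough Python sketch of how such a commitment rule could be matched on one agent cycle; the dictionaries, field names and helper structure are assumptions for illustration, not Shoham's AGENT0 implementation.

    # One illustrative AGENT0-style cycle step (assumed data model).
    def cycle_step(agent, inbox):
        for msg in inbox:                                    # match message condition
            if msg["type"] != "REQUEST":
                continue
            sender, act, time = msg["from"], msg["action"], msg["time"]
            # match mental condition against current beliefs and commitments
            is_friend = sender in agent["friends"]
            can_do = act in agent["capabilities"]
            free_at_time = time not in agent["commitments"]
            if is_friend and can_do and free_at_time:
                agent["commitments"][time] = act             # rule fires: commit

    agent = {"friends": {"alice"}, "capabilities": {"send_report"}, "commitments": {}}
    cycle_step(agent, [{"type": "REQUEST", "from": "alice",
                        "action": "send_report", "time": 10}])
    print(agent["commitments"])   # {10: 'send_report'}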

  20. 2.6 Concurrent MetateM I • Concurrent MetateM (M. Fisher, 1994): a multi-agent language in which each agent is programmed by giving it a temporal logic specification of its behavior, which is executed directly • Temporal Logics: modal variants of FOL; they can express "how the truth of propositions changes over time" [5]. (more on Modal Logic: [1], chapter 12; more on Temporal Logic: [3], chapter 15) → simple example coming up • Each agent is defined by: • Interface • MetateM program ([1]: "computational engine") • Agents communicate by broadcast messaging • Agents execute their objectives concurrently

  21. 2.6 Concurrent MetateM II Agent Interface • The agent interface consists of: • a unique agent identifier • Environment Propositions: set of message propositions accepted from other agents • Component Propositions: set of message propositions that can be sent to other agents • Example: stack(pop, push)[popped, full]

  22. 2.6 Concurrent MetateM III MetateM Program • Set of rules of the form: antecedent (past) ⇒ consequent (present and future) • Gabbay (1989): paradigm "declarative past, imperative future" • Specifying antecedents and consequents — very basic example: Propositional MetateM Logic (PML)

  23. 2.6 Concurrent MetateM IV Propositional MetateM Logic (PML) — unary operators and their meaning (t_i: 'now'; t_0: 'start'):
    ○φ : φ is true 'tomorrow'
    ●φ : φ was true yesterday (strong last)
    ⊙φ : φ was true yesterday (weak last)
    ◇φ : at some time in the future, φ
    □φ : always in the future, φ
    ◆φ : at some time in the past, φ
    ■φ : always in the past, φ

  24. 2.6 Concurrent MetateM V Propositional MetateM Logic (PML) — binary operators and their meaning (t_i: 'now'; t_0: 'start'):
    φ U ψ : φ will be true until ψ
    φ S ψ : φ has been true since ψ
    φ W ψ : φ will be true while ψ
    φ Z ψ : φ is true zince ψ (weak since)

  25. 2.6 Concurrent MetateM VI Propositional MetateM Logic (PML) • Examples:
    □important(agents) : "it is now and always will be true that agents are important"
    ◇important(carolin) : "sometime in the future, Carolin will be important"
    ¬friends(us) U apologize(you) : "we are not friends until you apologize"
    ○apologize(you) : "tomorrow you apologize"
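
To make these readings concrete, here is an illustrative Python sketch that evaluates some of the operators over a finite trace of states; restricting 'always'/'sometime' to the recorded trace is a simplification of the real (infinite-time) semantics.

    # Evaluating PML-style operators over a finite trace (simplified semantics).
    def holds(p, state):            # a state is the set of propositions true in it
        return p in state

    def next_(p, trace, i):         # O p : p is true 'tomorrow'
        return i + 1 < len(trace) and holds(p, trace[i + 1])

    def sometime(p, trace, i):      # <> p : at some time in the future, p
        return any(holds(p, s) for s in trace[i:])

    def always(p, trace, i):        # [] p : always in the future, p
        return all(holds(p, s) for s in trace[i:])

    def until(p, q, trace, i):      # p U q : p stays true until q becomes true
        for j in range(i, len(trace)):
            if holds(q, trace[j]):
                return True
            if not holds(p, trace[j]):
                return False
        return False

    trace = [{"asking"}, {"asking"}, {"apologize"}]
    print(until("asking", "apologize", trace, 0))   # True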

  26. 2.6 Concurrent MetateM VII Agent Cycle — for each agent: • update the history by receiving environment-proposition messages (matching w.r.t. the interface) from other agents • check which rules fire (compare antecedents against the history) • execute / communicate the consequents of fired rules and the commitments carried over from past cycles

  27. 2.6 Concurrent MetateM VIII Example for the MetateM programs of 3 agents — a resource producer rp and two resource consumers rc1, rc2:
    rp(ask1, ask2)[give1, give2]:
      ●ask1 ⇒ ◇give1;
      ●ask2 ⇒ ◇give2;
      start ⇒ □¬(give1 ∧ give2);
    rc1(give1)[ask1]:
      start ⇒ ask1;
      ●ask1 ⇒ ask1;
    rc2(ask1, give2)[ask2]:
      ●(ask1 ∧ ¬ask2) ⇒ ask2;
    Execution trace (propositions true at each agent per cycle):
    time   rp                     rc1            rc2
    0      --                     ask1           --
    1      ask1                   ask1           ask1, ask2
    2      ask1, ask2, give1      ask1           ask1
    3      ask1, give2            ask1, give1    ask1, ask2
    4      ask1, ask2, give1      ask1           ask1, give2
    5      ...                    ...            ...
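
The following Python sketch simulates the three agents round by round, as an illustration of broadcast messaging and "declarative past ⇒ imperative future" rules; the scheduling of rp's grants is an assumption, so the printed trace need not coincide step for step with the table above.

    from collections import deque

    # Illustrative round-based simulation of rp, rc1 and rc2 (assumed scheduling,
    # not Fisher's MetateM interpreter). Agents react to last round's broadcasts.
    def run(rounds=6):
        prev = set()            # propositions broadcast in the previous round
        pending = deque()       # rp's outstanding commitments (give1 / give2)
        for t in range(rounds):
            out = {"rp": set(), "rc1": set(), "rc2": set()}
            out["rc1"].add("ask1")                        # rc1 asks in every round
            if "ask1" in prev and "ask2" not in prev:     # rc2: yesterday ask1 and not ask2
                out["rc2"].add("ask2")
            for ask, give in (("ask1", "give1"), ("ask2", "give2")):
                if ask in prev and give not in pending:   # rp commits to grant eventually
                    pending.append(give)
            if pending:                                   # never give1 and give2 together
                out["rp"].add(pending.popleft())
            print(t, out)
            prev = set().union(*out.values())             # broadcast messaging

    run()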

  28. 3 Practical Reasoning Agents

  29. 3.1 Practical Reasoning = Deliberation + ME-Reas. I • From the undecidability and complexity of unrestricted FOL reasoning → full, general FOL with a general-purpose FOL reasoning mechanism is not suitable for agent architectures. (The same arguments apply to e.g. ontology languages → will lead to Description Logics (see Part II)) • For agents: Practical Reasoning: reasoning towards actions ("figuring out what to do"). "Practical reasoning is a matter of weighing conflicting considerations for and against competing options, where the relevant considerations are provided by what the agent desires/values/cares about and what the agent believes" (Bratman, see [1])

  30. 3.1 Practical Reasoning = Deliberation + ME-Reas. II • ("Theoretical reasoning" is directed towards beliefs; practical reasoning is directed towards actions.) • Deciding what state of affairs we want to achieve: Deliberation. Deciding how we want to achieve this state: Means-End Reasoning (MER) • Result of MER: a plan. → Then: commit to the plan (or to the plan's post-condition, respectively), execute the plan → hope: goal reached • Dynamic environments and the computational cost of deliberation → deliberation cannot last indefinitely; the agent must control its deliberation reasoning • Goals (plan post-conditions) that the agent has committed to: "intentions"

  31. 3.1 Practical Reasoning = Deliberation + ME-Reas. III Intentions • (Ordinary human point of view: intentions ~ actions; intentions ~ states of mind) • Intentions are "stronger" than desires (w.r.t. actions) (e.g.: the desire to be more attractive vs. the intention to go to the fitness center) • Here: intentions are states → they directly lead towards commitment to actions (plans) to achieve the state → intentions drive MER (but: the commitment may change if other actions are more important) • Intentions persist. (Problem: decide when to stick to intentions and when to give them up (compare "functional vs. reactive"))

  32. 3.1 Practical Reasoning = Deliberation + ME-Reas. IV Intentions • Intentions influence reasoning: alternatives inconsistent with intentions → ruled out → a (welcome) constraint on alternatives • Intentions influence beliefs: "an agent should believe what it intends" • Intention-belief inconsistency: Intention(φ) while Belief(not_able_to_do(φ)): irrational, unacceptable • Intention-belief incompleteness: Intention(φ) while Belief(will_not_be_the_case(φ)): may be acceptable → "asymmetry" (Bratman, see [1])

  33. 3.1 Practical Reasoning = Deliberation + ME-Reas. V • Beliefs, desires, intentions: symbolically represented • Deliberation = <option, filter> • The option generation function option generates desires (goals) • The filtering function filter selects intentions (commitments) • The belief revision function brf updates beliefs (compare the next function in the former model)
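
A hedged Python skeleton of these three functions; representing beliefs, desires and intentions as sets of strings and the toy function bodies are assumptions made only to show the shape of the deliberation step (filter is written filter_ here solely to avoid shadowing the Python builtin).

    from typing import Set

    Belief = Desire = Intention = Percept = str

    def brf(beliefs: Set[Belief], percept: Percept) -> Set[Belief]:
        """Belief revision: fold the new percept into the belief set."""
        return beliefs | {percept}

    def option(beliefs: Set[Belief], intentions: Set[Intention]) -> Set[Desire]:
        """Option generation: propose states of affairs worth wanting now."""
        return {b.replace("dirty", "clean") for b in beliefs if "dirty" in b}

    def filter_(beliefs: Set[Belief], desires: Set[Desire],
                intentions: Set[Intention]) -> Set[Intention]:
        """Filtering: choose which desires the agent actually commits to."""
        return intentions | desires       # placeholder: commit to every option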

  34. 3.2 Means-End Reasoning I • MER: in AI known as Planning • Input to a planning / MER system: • intention (committed goal, "task"; achievement, maintenance or avoidance task, ...) • current agent beliefs (current state of the environment) • effectoric capabilities (possible actions) • Output: a plan (sequence of actions) leading to the goal • First planning system: STRIPS. State of the world: FOL formulas. Actions: pre-condition φ; post-condition ψ (Diagram: the goal/intention/task, the state of the environment and the possible actions are input to the planner, which outputs a plan to achieve the goal)

  35. 3.2 Means-End Reasoning II Planning. Example: STRIPS & The Blocks World • Three blocks (A, B, C) and a robot arm (figure: A on top of B; C beside them on the table) • Set of predicates for world descriptions:
    On(X,Y) : block X is on top of Y
    OnTable(X) : block X is on the table
    Clear(X) : nothing is on top of X
    Holding(X) : the arm holds block X
    ArmEmpty : the arm holds nothing
  • Current world state (agent's beliefs): {Clear(A), On(A,B), OnTable(B), OnTable(C), Clear(C), ArmEmpty} • Example for Des or Int: {OnTable(A), OnTable(B), OnTable(C)}

  36. 3.2 Means-End Reasoning III Planning. Example: STRIPS & The Blocks World • An action α from the set of actions is a triple ⟨Pα, Dα, Aα⟩: • Pα : set of FOL formulae: pre-conditions of α • Dα : set of FOL formulae: delete-set of α • Aα : set of FOL formulae: add-set of α • Actions (operators):
    Stack(X,Y):   Pα = {Clear(Y), Holding(X)}; Dα = {Clear(Y), Holding(X)}; Aα = {ArmEmpty, On(X,Y)}
    UnStack(X,Y): Pα = {Clear(X), On(X,Y), ArmEmpty}; Dα = {On(X,Y), ArmEmpty}; Aα = {Holding(X), Clear(Y)}

  37. 3.2 Means-End Reasoning IV Planning. Example: STRIPS & The Blocks World
    Pickup(X):  Pα = {Clear(X), OnTable(X), ArmEmpty}; Dα = {OnTable(X), ArmEmpty}; Aα = {Holding(X)}
    PutDown(X): Pα = {Holding(X)}; Dα = {Holding(X)}; Aα = {ArmEmpty, OnTable(X)}
  • A planning problem is a triple ⟨Δ, O, I⟩: • Δ : initial beliefs of the agent • O : set of action (operator) descriptions • I : set of intentions (tasks) • A plan is a sequence of actions π = (α1, ..., αn)

  38. 3.2 Means-End Reasoning V Planning. Example: STRIPS & The Blocks World • W.r.t. a planning problem ⟨Δ, O, I⟩, a plan π = (α1, ..., αn) determines a sequence of n+1 environment models Δ_0, Δ_1, ..., Δ_n with Δ_0 = Δ and Δ_i = (Δ_{i-1} \ D_{αi}) ∪ A_{αi} for 1 ≤ i ≤ n • The plan is acceptable iff Δ_{i-1} ⊨ P_{αi} for 1 ≤ i ≤ n (each action's pre-conditions hold when it is executed) • The plan is correct iff it is acceptable and Δ_n ⊨ I • Notation: if Δ is the set of beliefs and I the set of intentions, we write π = plan(Δ, I) if plan π is correct w.r.t. the planning problem ⟨Δ, O, I⟩
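
These definitions translate almost literally into code. The sketch below applies ground instances of the blocks-world operators and checks acceptability and correctness of a plan; representing environment models as Python sets of ground atoms is an assumption made for illustration.

    # Illustrative STRIPS semantics: ground operator instances as (P, D, A) triples.
    OPS = {
        "UnStack(A,B)": ({"Clear(A)", "On(A,B)", "ArmEmpty"},   # P: pre-conditions
                         {"On(A,B)", "ArmEmpty"},               # D: delete-set
                         {"Holding(A)", "Clear(B)"}),           # A: add-set
        "PutDown(A)":   ({"Holding(A)"},
                         {"Holding(A)"},
                         {"ArmEmpty", "OnTable(A)"}),
    }

    def final_model(delta, plan):
        """Apply the plan; return the final model, or None if not acceptable."""
        for name in plan:
            pre, dele, add = OPS[name]
            if not pre <= delta:          # pre-conditions must hold in the current model
                return None
            delta = (delta - dele) | add  # next model: (previous \ D) union A
        return delta

    def correct(delta, plan, goal):
        """Acceptable, and the intentions hold in the final environment model."""
        final = final_model(delta, plan)
        return final is not None and goal <= final

    delta0 = {"Clear(A)", "On(A,B)", "OnTable(B)", "OnTable(C)", "Clear(C)", "ArmEmpty"}
    print(correct(delta0, ["UnStack(A,B)", "PutDown(A)"],
                  {"OnTable(A)", "OnTable(B)", "OnTable(C)"}))   # True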

  39. 3.2 Means-End Reasoning VI Planning. Example: STRIPS & The Blocks World • The overall means-end reasoning capability of the agent is implemented in a function plan(B, I, Ac) that returns a plan π • Instead of computing a plan anew every time: choose a plan with matching pre-condition and final post-condition out of a plan library

  40. 3.3 Implementing Practical Reasoning Agents I Overall Agent Control Loop • Notation: head(π) denotes the first action of plan π; plan(B, I, Ac) is the means-end reasoning function over beliefs B, intentions I and available actions Ac

  41. 3.3 Implementing Practical Reasoning Agents II Overall Agent Control Loop (continuation of the control loop; the execution step repeatedly takes head(π) as the next action)

  42. 3.3 Implementing Practical Reasoning Agents III Overall Agent Control Loop (continuation of the control loop; replanning is done via plan(B, I, Ac) — see the sketch below)
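
The control loop itself appears on these slides only as images; the sketch below follows the well-known practical-reasoning loop from [1], with succeeded, impossible, reconsider and sound passed in as helper functions whose bodies are left to the caller.

    # Practical-reasoning agent control loop (sketch following [1]; helpers are stubs).
    def control_loop(B, I, Ac, get_percept, execute,
                     brf, options, filter_, plan,
                     succeeded, impossible, reconsider, sound):
        while True:
            p = get_percept()
            B = brf(B, p)
            D = options(B, I)
            I = filter_(B, D, I)
            pi = plan(B, I, Ac)                        # means-end reasoning
            while pi and not (succeeded(I, B) or impossible(I, B)):
                alpha = pi[0]                          # head(pi)
                execute(alpha)
                pi = pi[1:]                            # tail(pi)
                p = get_percept()
                B = brf(B, p)
                if reconsider(I, B):                   # deliberate again?
                    D = options(B, I)
                    I = filter_(B, D, I)
                if not sound(pi, I, B):                # plan still fits the intentions?
                    pi = plan(B, I, Ac)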

  43. 3.3 Implementing Practical Reasoning Agents IV Commitment to ends and means • An option that passes filter → commitment to the option • Question: how long will the commitment last? What circumstances change the commitment? • Rao & Georgeff ([8]): three commitment strategies: • Blind commitment: maintain an intention until the agent believes the intention has been achieved • Single-minded commitment: maintain an intention until the agent believes the intention has been achieved or that it is impossible to achieve • Open-minded commitment: maintain an intention as long as it is still believed possible

  44. 3.3 Implementing Practical Reasoning Agents V Commitment to ends and means • More formally (in the special modal logics of Rao & Georgeff [8]): • Blind commitment: ... • Single-minded commitment: ... • Open-minded commitment: ... (the formalizations appear on the slide as formulas/images)

  45. 3.3 Implementing Practical Reasoning Agents VI Commitment to ends and means • The overall agent control loop from 3.3 uses single-minded commitment: maintain the commitment until it has succeeded, or the intention is believed impossible, or there is nothing more to execute (lines 10-20) • The intention is both to ends and to means (an intention to ends implies an intention to the means to reach those ends (goals)) • → Question: how long does a commitment last → how often will the agent deliberate → how often will reconsider be executed? • If option and filter were computationally cheap → reconsider after every step. But both are costly! • If the commitment changes too often → no "rationality"; if the commitment never changes → too inflexible (compare functional vs. reactive)

  46. 3.3 Implementing Practical Reasoning Agents VII Commitment to ends and means • Formally: "degree of boldness" (number of steps between calls to reconsider) vs. "rate of environment change" (number of steps between external environment changes) • If reconsider returns true (the agent chooses to deliberate) but the intentions are not changed → the deliberation was useless • Simply: reconsider is optimal if (* = don't care):
    Should have changed intentions?   Chose to deliberate?   Changed intentions?   reconsider optimal?
    no                                no                     *                     yes
    yes                               no                     *                     no
    *                                 yes                    no                    no
    *                                 yes                    yes                   yes
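
The table collapses into a small boolean condition; a sketch with assumed variable names:

    # reconsider() behaved optimally iff deliberation happened exactly when needed.
    def reconsider_was_optimal(chose_to_deliberate, changed_intentions,
                               would_have_changed_intentions):
        if chose_to_deliberate:
            return changed_intentions             # useful only if intentions changed
        return not would_have_changed_intentions  # skipping is right only if nothing to change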

  47. Bibliography I • [1] M. Wooldridge: "An Introduction to Multi-Agent-Systems", Wiley, 2001, 294 pages • [2] G. Antoniou, F. van Harmelen: "A Semantic Web Primer", MIT Press, 2004, 392 pages • [3] S. Russell, P. Norvig: "Artificial Intelligence: A Modern Approach", Prentice-Hall, 1995, 998 pages • [4] J. Schlichter, U. Borghoff: "Computer-supported Cooperative Work", Springer, 2000, 456 pages • [5] G. Weiß (ed.): "Multi-Agent-Systems", MIT Press, 1999, 876 pages • [6] J. Rosenschein, Univ. of Jerusalem, lecture slides "Introduction to MAS", available through www.csc.liv.ac.uk/~mjw/pubs/imas (website for [1])

  48. Bibliography II • [6] W. Wörndl, TU München, lecture slides for "Verteiltes Problemlösen", SS 2005 • [7] M. Richter: "Prinzipien der künstlichen Intelligenz", Teubner, 1984 • [8] A. Rao, M. Georgeff: "Modeling Rational Agents within a BDI-Architecture", Proc. Knowledge Representation and Reasoning (KR&R), 1991
