
Software Agent: Architecture





  1. Software Agent: Architecture

  2. Outline • Overview of agent architectures • Deliberative agents • Deductive reasoning • Practical reasoning • Reactive agents • Hybrid agents • Summary

  3. Agent Architectures • An agent is a computer system capable of flexible autonomous action • Autonomy, reactiveness, pro-activeness, and social ability • Kaelbling considers an agent architecture to be: • A specific collection of software (or hardware) modules, typically designated by boxes with arrows indicating the data and control flow among the modules. A more abstract view of an architecture is as a general methodology for designing particular modular decompositions for particular tasks. • Maes defines an agent architecture as: • A particular methodology for building [agents]. It specifies how… the agent can be decomposed into the construction of a set of component modules and how these modules should be made to interact. The total set of modules and their interactions has to provide an answer to the question of how the sensor data and the current internal state of the agent determine the actions… and future internal state of the agent. An architecture encompasses techniques and algorithms that support this methodology.

  4. Brief History of Agent Architectures • Originally (1956–1985), pretty much all agents designed within AI were symbolic reasoning agents • In its purest expression, this approach proposes that agents use explicit logical reasoning in order to decide what to do • Problems with symbolic reasoning led to a reaction against it: the so-called reactive agents movement (1985–present) • From 1990 to the present, a number of alternatives have been proposed: hybrid architectures, which attempt to combine the best of reasoning and reactive architectures

  5. Types of Agent Architecture • Deliberative approach • Deductive reasoning agents • Practical reasoning agents • Reactive approach • Hybrid approach

  6. Deliberative Agents (1) • We define a deliberative agent or agent architecture to be one that: • contains an explicitly represented, symbolic model of the world • makes decisions (for example, about what actions to perform) via symbolic reasoning • The underlying claim is that intelligent behavior can be generated by such representation and manipulation of symbols • This paradigm is known as symbolic AI • Explicit symbolic model of the world in which decisions are made via logical reasoning, based on pattern matching and symbolic manipulation • Sense-plan-act problem-solving paradigm of classical AI planning systems • Examples of deliberative architectures • BDI • GRATE*, HOMER • Shoham: Agent-Oriented Programming

  7. Deliberative Agents (2) [Figure: a deliberative agent; sensors feed a world model, the planner derives a plan from it, and the plan executor drives the effectors]

  8. Problems of Deliberative Agents • Performance problems • Transduction problem • it is time-consuming to translate all of the needed information into the symbolic representation, especially if the environment is changing rapidly • Representation problem • how the world model is represented symbolically, and how to get agents to reason with the information in time for the results to be useful • Late results may be useless • Does not scale to real-world scenarios

  9. Deliberative Agents: Deductive Reasoning (1) [Figure: the agent's see and action functions connect it to the environment; next updates its internal state] • How can an agent decide what to do using theorem proving? • Basic idea is to use logic to encode a theory stating the best action to perform in any given situation • Let: • ρ be this theory (typically a set of rules) • Δ be a logical database that describes the current state of the world • Ac be the set of actions the agent can perform • Δ ⊢ρ φ mean that φ can be proved from Δ using ρ • Agent internal state: • a set of rules ρ • the current state of the world Δ

  10. Deliberative Agents: Deductive Reasoning (2)
  /* try to find an action explicitly prescribed */
  1. for each a ∈ Ac do
  2.   if Δ ⊢ρ Do(a) then
  3.     return a
  4.   end-if
  5. end-for
  /* try to find an action not excluded */
  6. for each a ∈ Ac do
  7.   if Δ ⊬ρ ¬Do(a) then
  8.     return a
  9.   end-if
  10. end-for
  11. return null /* no action found */
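The loop above can be sketched in Python. The entailment checks are stand-ins: rather than call a real theorem prover, this sketch assumes the results of Δ ⊢ρ Do(a) and Δ ⊢ρ ¬Do(a) have already been computed into two sets.

```python
# Sketch of the deductive action-selection loop. Instead of a prover,
# we assume the prescribed actions (those a with database |- Do(a)) and
# the excluded actions (those a with database |- not-Do(a)) are given.
def select_action(actions, prescribed, excluded):
    # lines 1-5: try to find an action explicitly prescribed
    for a in actions:
        if a in prescribed:
            return a
    # lines 6-10: try to find an action not explicitly excluded
    for a in actions:
        if a not in excluded:
            return a
    return None  # line 11: no action found
```

For the vacuum agent, `select_action(['turn', 'forward', 'suck'], {'suck'}, set())` returns `'suck'`; if nothing is prescribed, the first non-excluded action is chosen instead.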

  11. Deliberative Agents: Example of Deductive Reasoning (1) • The Vacuum World: a robot to clean up a house • Environment • a 3 × 3 grid • the vacuum world changes with the random appearance and disappearance of dirt • the agent always starts at (0, 0) facing north • the agent always has a definite orientation d ∈ {north, south, east, west} • Agent perception: it only perceives dirt beneath it • Possible actions: Ac = {turn, forward, suck} • Goal: traverse the room continuously, searching for and clearing away dirt

  12. Deliberative Agents: Example of Deductive Reasoning (2) • Representation of the world • Use 3 domain predicates to solve the problem: In(x, y): the agent is at (x, y); Dirt(x, y): there is dirt at (x, y); Facing(d): the agent is facing direction d • Update the world model in two stages • In and Facing predicates: update : D × Ac → D • Dirt predicate: next : D × Per → D • Rules ρ for determining what to do:
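As a rough illustration of what the rule set ρ might encode for the vacuum world, here is one possible set of decision rules in Python. The rule bodies (suck when on dirt, otherwise move forward, turn at a wall) are an assumption of this sketch; the transcript does not reproduce the slide's actual rules.

```python
# Illustrative vacuum-world rules (an assumed rule set, not the slide's).
def decide(pos, facing, dirt, size=3):
    """pos: (x, y); facing: 'north'/'south'/'east'/'west';
    dirt: set of dirty cells; size: grid dimension (3 x 3 world)."""
    if pos in dirt:
        return 'suck'          # In(x, y) and Dirt(x, y) => Do(suck)
    # compute the cell ahead of the agent
    dx, dy = {'north': (0, 1), 'south': (0, -1),
              'east': (1, 0), 'west': (-1, 0)}[facing]
    ahead = (pos[0] + dx, pos[1] + dy)
    if 0 <= ahead[0] < size and 0 <= ahead[1] < size:
        return 'forward'       # free cell ahead => keep traversing
    return 'turn'              # wall ahead => change orientation
```

Starting at (0, 0) facing north with dirt underneath, the agent sucks; on a clean cell it moves forward until it reaches the grid edge, then turns.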

  13. Deliberative Agents: Discussion of Deductive Reasoning • Advantages • Simple • Elegant logical semantics • Problems • How to convert video camera input to Dirt(0, 1)? • Time complexity of reasoning (searching the space) • Decision making assumes a static environment: calculative rationality • Decision making using first-order logic is undecidable!

  14. Deliberative Agents Practical Reasoning (1) • Practical reasoning is reasoning directed towards actions • “Practical reasoning is a matter of weighing conflicting considerations for and against competing options, where the relevant considerations are provided by what the agent desires/values/cares about and what the agent believes.” (Bratman) • Human practical reasoning consists of two activities: • Deliberation: deciding what state of affairs we want to achieve • Means-ends reasoning: deciding how to achieve these states of affairs • The outputs of deliberation are intentions

  15. Deliberative Agents: Practical Reasoning (2) [Figure: a planner takes the state of the environment, a goal/intention/task, and the possible actions, and produces a plan to achieve the goal] • Intentions • The states of affairs that an agent has chosen and committed to • They play a crucial role in the practical reasoning process • Intentions drive means-ends reasoning • Intentions persist • Intentions constrain future deliberation • Intentions are closely related to beliefs about the future • Means-ends reasoning • Basic idea is to give an agent: • a representation of the goal/intention to achieve • a representation of the actions it can perform • a representation of the environment • and have it generate a plan to achieve the goal • Known as planning in the AI community
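The two activities can be sketched as a minimal control loop; `deliberate`, `plan`, and `execute` below are hypothetical placeholders for the agent's actual deliberation, means-ends reasoning, and execution machinery.

```python
# Minimal practical-reasoning loop (a sketch, with the three stages
# passed in as functions rather than implemented here).
def practical_reasoning_agent(beliefs, deliberate, plan, execute):
    intention = deliberate(beliefs)     # deliberation -> intention
    pi = plan(beliefs, intention)       # means-ends reasoning -> plan
    for act in pi:
        execute(act)                    # carry out the plan
    return intention, pi
```

With toy stand-ins, e.g. `deliberate = lambda b: 'clean'` and `plan = lambda b, i: ['suck', 'turn']`, the loop commits to the intention and then executes each planned action in order.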

  16. Deliberative Agents: Example of Practical Reasoning (1) • The Blocks World • Contains a robot arm, 3 blocks (A, B, and C) of equal size, and a table-top • Uses the closed world assumption: anything not stated is assumed to be false • Representation of the environment (the pictured state, with A stacked on B and C alongside, is Clear(A) ∧ On(A, B) ∧ OnTable(B) ∧ OnTable(C)) • On(x, y): obj x is on top of obj y • OnTable(x): obj x is on the table • Clear(x): nothing is on top of obj x • Holding(x): the arm is holding x • ArmEmpty: the arm is holding nothing • A goal is represented as a set of formulae • Here is a goal: OnTable(A) ∧ OnTable(B) ∧ OnTable(C)

  17. Deliberative Agents: Example of Practical Reasoning (2) • Actions = {stack, unstack, pickup, putdown} • Actions are represented using STRIPS operators • Pre-condition/delete/add list notation • Each action has: • a name: which may have arguments • a pre-condition list: facts which must be true for the action to be executed • a delete list: facts that are no longer true after the action is performed • an add list: facts made true by executing the action • Example “stack”: the stack action occurs when the robot arm places the object x it is holding on top of object y. Stack(x, y): pre Clear(y) ∧ Holding(x); del Clear(y), Holding(x); add ArmEmpty, On(x, y)
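A STRIPS operator like stack can be modeled directly from its pre/del/add lists. The encoding below (ground atoms as strings, states as sets) is one illustrative choice, not the notation's only reading.

```python
# The stack(x, y) operator from the slide as pre/del/add sets of atoms.
def stack(x, y):
    return {
        'name': f'stack({x},{y})',
        'pre':  {f'Clear({y})', f'Holding({x})'},
        'del':  {f'Clear({y})', f'Holding({x})'},
        'add':  {'ArmEmpty', f'On({x},{y})'},
    }

def apply_op(state, op):
    """Apply op to state if its precondition holds; else return None."""
    if not op['pre'] <= state:
        return None
    return (state - op['del']) | op['add']
```

Applying stack(A, B) in a state where the arm holds A and B is clear deletes those two facts and adds ArmEmpty and On(A, B), exactly as the lists prescribe.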

  18. Deliberative Agents: Practical Reasoning (Plans) [Figure: a plan as a path of actions leading from the initial state I to the goal state G] • A plan • A sequence (list) of actions, Π = (a1, …, an), determines n+1 environment states Δ0, Δ1, …, Δn • A plan is correct if • Δ0 is the initial state • the precondition of every action is satisfied in the preceding environment state • Δn is a goal state • Plan generation becomes a search problem • Forward search • Backward search • Heuristic search
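The correctness conditions translate almost verbatim into code. The operator encoding (dicts with 'pre', 'del', and 'add' sets of ground atoms) is an assumption of this sketch.

```python
# Check the plan-correctness definition: starting from the initial state,
# every action's precondition must hold in the preceding state, and the
# final state must satisfy the goal.
def plan_correct(initial, plan, goal):
    state = initial
    for op in plan:
        if not op['pre'] <= state:   # precondition fails in some Delta_i
            return False
        state = (state - op['del']) | op['add']
    return goal <= state             # Delta_n must satisfy the goal
```

For example, the one-step plan [putdown(A)] is correct from {Holding(A)} for the goal {OnTable(A)}, but incorrect from a state where the arm holds nothing.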

  19. Deliberative Agents: Discussion of Practical Reasoning • Problem: deliberation and means-ends reasoning processes are not instantaneous; they have a time cost • Suppose the agent starts deliberating at t0, begins means-ends reasoning at t1, and begins executing the plan at time t2 • Time to deliberate is tdeliberate = t1 − t0 • Time for means-ends reasoning is tme = t2 − t1 • This agent will have overall optimal behavior in the following circumstances: • when deliberation and means-ends reasoning take a vanishingly small amount of time • when the world is guaranteed to remain static while the agent is deliberating and performing means-ends reasoning, so that the assumptions on which the choice of intention and the plan to achieve it are based remain valid until the agent has completed deliberation and means-ends reasoning • when an intention that is optimal when achieved at time t0 (the time at which the world is observed) is guaranteed to remain optimal until time t2 (the time at which the agent has found a course of action to achieve the intention)

  20. Behavior Languages • Brooks has put forward three theses • Intelligent behavior can be generated without explicit representations of the kind that symbolic AI proposes • Intelligent behavior can be generated without explicit abstract reasoning of the kind that symbolic AI proposes • Intelligence is an emergent property of certain complex systems • He identifies two key ideas that have informed his research • Situatedness and embodiment: ‘Real’ intelligence is situated in the world, not in disembodied systems such as theorem provers or expert systems • Intelligence and emergence: ‘Intelligent’ behavior arises as a result of an agent’s interaction with its environment. Also, intelligence is ‘in the eye of the beholder’; it is not an innate, isolated property

  21. Subsumption Architecture • A hierarchy of task-accomplishing behaviors • Each behavior is a rather simple rule-like structure • Each behavior ‘competes’ with others to exercise control over the agent • Lower layers represent more primitive kinds of behavior (such as avoiding obstacles), and have precedence over layers further up the hierarchy • The resulting systems are, in terms of the amount of computation they do, extremely simple • Some of the robots do tasks that would be impressive if they were accomplished by symbolic AI systems
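A subsumption-style controller can be approximated as a prioritized list of (condition, action) behaviors in which the first behavior whose condition fires suppresses everything below it. The three example behaviors are illustrative, not Brooks's actual robots.

```python
# Sketch of one subsumption control step: behaviors are (condition,
# action) pairs ordered highest priority first; the first firing
# behavior inhibits all lower layers.
def subsumption_step(percept, behaviors):
    for condition, action in behaviors:
        if condition(percept):
            return action(percept)
    return None  # no behavior fired

behaviors = [
    (lambda p: p.get('obstacle'), lambda p: 'avoid'),   # layer 0: avoid obstacles
    (lambda p: p.get('dirt'),     lambda p: 'suck'),    # layer 1: clean
    (lambda p: True,              lambda p: 'wander'),  # layer 2: default
]
```

With both an obstacle and dirt present, the obstacle-avoidance layer wins, matching the rule that lower (more primitive) layers take precedence.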

  22. Reactive Agents (1) • Reactive agents • have at most a very simple internal representation of the world • provide tight coupling of perception and action • Behavior-based paradigm • Intelligence is a product of the interaction between an agent and its environment • Do we really need abstract reasoning?

  23. Reactive Agents (2) [Figure: a reactive agent; sensors feed a set of stimulus-response behaviours (state1 → action1, …, staten → actionn) whose outputs drive the effectors]

  24. Reactive Agents (3) • Each behavior continually maps perceptual input to action output • Reactive behavior: • action : S → A, where S denotes the states of the environment and A the primitive actions the agent is capable of performing • Example (thermostat): action(s) = heater off, if temperature OK; heater on, otherwise
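The thermostat example, written as a pure state-to-action mapping with no internal state; the setpoint value is an assumption of this sketch.

```python
# The thermostat behavior action: S -> A; 20 degrees is an assumed setpoint.
OK_TEMPERATURE = 20.0

def action(temperature):
    """Map the perceived state (temperature) straight to an action."""
    return 'heater_off' if temperature >= OK_TEMPERATURE else 'heater_on'
```

There is no deliberation here at all: perception is coupled directly to action, which is exactly the tight coupling the slide describes.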

  25. Discussion of Reactive Agents • Advantages: simplicity, economy, computational tractability, robustness against failure, elegance • Problems • Agents without environment models must have sufficient information available from their local environment • If decisions are based on the local environment, how can they take into account non-local information? (i.e., such agents have a “short-term” view) • Difficult to make reactive agents that learn • Since behavior emerges from component interactions plus the environment, it is hard to see how to engineer specific agents (no principled methodology exists) • Typically “handcrafted” • Development takes a lot of time • Impossible to build large systems? • Can be used only for its original purpose

  26. Comparison between Two Approaches

  27. Hybrid Agents (1) • Combination of deliberative and reactive behavior • An agent consists of several subsystems • Subsystems that develop plans and make decisions using symbolic reasoning (deliberative component) • Reactive subsystems that are able to react quickly to events without complex reasoning (reactive component) • Examples: • InteRRaP • Touring Machines • Procedural Reasoning System (PRS) • 3T
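One way such a combination is often wired (a sketch, not the design of InteRRaP, Touring Machines, PRS, or 3T): reactive rules get first refusal, and the deliberative planner's current plan supplies an action only when no rule fires.

```python
# One hybrid control step: fast reactive rules take precedence; the
# deliberative component's plan (a list of actions, consumed in order)
# is the fallback. The wiring is illustrative.
def hybrid_step(percept, reactive_rules, plan):
    """reactive_rules: list of (condition, action) pairs."""
    for condition, act in reactive_rules:
        if condition(percept):
            return act           # quick reaction, no complex reasoning
    if plan:
        return plan.pop(0)       # next step of the deliberative plan
    return None
```

A danger percept triggers the reactive rule immediately; in quiet moments the agent works through its plan step by step.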

  28. Hybrid Agents (2) [Figure: a hybrid agent combining a deliberative component (world model, planner, plan executor) with a reactive component (stimulus-response rules); the two exchange observations and modifications between the sensors and the effectors]

  29. Hybrid Agents: Layered Architectures [Figure: layering schemes for n layers; in horizontal layering every layer receives sensor input and produces action output, while in vertical layering sensor input enters at one end of the layer stack and action output leaves at the other]

  30. Hybrid Agents: InteRRaP [Figure: InteRRaP's vertically layered architecture; a world interface handles perceptual input and action output, with the behaviour layer (world model), plan layer (planning knowledge), and cooperation layer (social knowledge) stacked above it]

  31. Summary • Features of each agent architecture • Deliberative agents • Reactive agents • Hybrid agents • How to organize information processing of agents? • Hierarchical modeling • Functionalized modularization • Learning of internal modules • Next lectures • Implementation of agent architectures • Multi-agent systems
