
Autonomous Agents



Presentation Transcript


  1. Autonomous Agents Overview

  2. Topics • Theories: logic-based formalisms for the explanation, analysis, or specification of autonomous agents. • Languages: agent-based programming languages. • Architectures: integration of different components into a coherent control framework for an individual agent.

  3. Topics • Multi-agent architectures: methodologies and architectures for groups of agents (the individual agents may be built on different architectures) • Agent modeling: modeling other agents’ behavior or mental state from the perspective of an individual agent • Agent capabilities • Agent testbeds and evaluation

  4. Agent Theories, Languages, and Architectures Wooldridge & Jennings (ATAL 1994, LNAI 890)

  5. What is an agent? • Weak notion: • Autonomy • Social ability • Reactivity • Pro-activeness • Strong notion: • Mental properties such as knowledge, belief, intention, obligation • Emotions • Other attributes: mobility, veracity, benevolence, rationality

  6. Agent Theories • How to conceptualize agents? • What properties should agents have? • How to formally represent and reason about agent properties?

  7. Agent Theories • Definition: an agent theory is a specification for an agent → formalisms for representing and reasoning about agent properties • Starting point: agent = entity ‘which appears to be the subject of beliefs, desires, etc.’

  8. Intentional system • An intentional system is one whose behavior can be predicted by the method of attributing beliefs, desires, and rational acumen • It has been shown that almost anything can be described as an intentional system • Good as an abstract tool for describing, explaining, and predicting the behavior of complex systems

  9. Intentional system - Examples • One studies hard because one wants to get a good GPA. • One takes the course ‘cs579-robotic’ because one believes that it will be fun. • One takes the course ‘cs579-robotic’ because there is no 500-level course offered. • One takes the course ‘cs579-robotic’ because one believes that the course is easy.

  10. Agent Attitudes • Information attitudes: related to the information that an agent has about the environment • Belief • Knowledge • Pro-attitudes: those that guide the agent’s actions • Desire • Intention • Obligation • Commitment • Choice • An agent should be represented in terms of at least one information attitude and one pro-attitude. Why?

  11. Representing intentional notions • Example: ‘Jan believes Cronos is the father of Zeus’ • Naïve translation into FOL: Believe(Jan, Father(Zeus,Cronos)) • Problems: • FOL does not allow a formula such as Father(Zeus,Cronos) to appear as a term, so predicates cannot be nested this way • Referential opacity: Zeus = Jupiter, yet substituting equals for equals yields Believe(Jan, Father(Jupiter,Cronos)), which is wrong • Conclusion: FOL is not suitable, since intentional notions are context dependent (referentially opaque).

  12. Possible World Semantics • Hintikka (1962): an agent’s beliefs can be characterized as a set of possible worlds. • Example: • A door-opener robot: the door is closed and the lock needs to be unlocked, but the robot does not know whether the lock is locked or not – two possibilities: • {closed, locked} • {closed, unlocked} • Card player (poker): ? • UNIX ping command: ?
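The truth condition behind this picture is the standard Kripke-style one (not spelled out on the slide; the symbols M, w, and B_i below are the usual model, world, and accessibility-relation notation, not notation introduced here):

```latex
% Standard possible-worlds truth condition for belief (Hintikka-style):
% agent i believes phi at world w iff phi holds at every world w'
% that is an epistemic alternative to w (i.e. accessible via B_i).
\langle M, w \rangle \models \mathrm{Bel}_i\,\varphi
\quad\text{iff}\quad
\forall w' \; \bigl( (w, w') \in B_i \;\Rightarrow\; \langle M, w' \rangle \models \varphi \bigr)
```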

  13. Possible World Semantics • Each world represents a state that the agent believes it might be in, given what it knows. • Each world is called an epistemic alternative. • The agent believes something if it is true in all of its epistemic alternatives. • Problem: logical omniscience – the agent believes all the logical consequences of its beliefs → impossible to compute.
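A minimal sketch of this semantics in code, using the door-opener robot from slide 12 (the world representation and helper names are mine, purely for illustration):

```python
# Possible-worlds belief, sketched for the door-opener robot of slide 12:
# the robot knows the door is closed but not whether the lock is locked,
# so it entertains two epistemic alternatives.
worlds = [
    {"closed": True, "locked": True},
    {"closed": True, "locked": False},
]

def believes(worlds, proposition):
    """The agent believes a proposition iff it holds in every epistemic alternative."""
    return all(proposition(world) for world in worlds)

print(believes(worlds, lambda w: w["closed"]))      # True  -- believed
print(believes(worlds, lambda w: w["locked"]))      # False -- not believed
print(believes(worlds, lambda w: not w["locked"]))  # False -- not believed either
```

Logical omniscience shows up here as well: anything that happens to hold in both worlds counts as ‘believed’, whether or not the agent could ever derive it.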

  14. Alternatives to PWS • Levesque – belief and awareness: distinguishes explicit belief (small) from implicit belief (large). • No nested belief • The notion of a situation is unclear • Under certain situations: unrealistic predictions • Konolige – the deduction model: models the beliefs of a symbolic AI system as a database of beliefs plus an inference system. • Simple

  15. Others • Meta-language: one in which it is possible to represent the properties of another language • Problem: inconsistency • Pro-attitudes: goals and desires – adapting possible world semantics to model goals and desires • Problem: side effects

  16. Theory of agency • Realistic agents: • combination of different components • dynamic aspects • Moore – knowledge and action: studies the problem of knowledge preconditions for actions • I need to know the telephone number of my friend Enrico in order to call him. • I can find the telephone number in the telephone book. • I need to know that the course is easy before I sign up for it.
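A toy illustration of a knowledge precondition in code, based on the phone-book example above (all names here are hypothetical; this is not Moore's formalism):

```python
# Toy knowledge precondition (illustrative only, not Moore's logic):
# 'call' can only be executed once the agent knows the number,
# and 'look_up' is the knowledge-producing action that enables it.
phone_book = {"Enrico": "555-0147"}   # hypothetical data
knowledge = {}                        # what the agent currently knows

def look_up(name):
    knowledge[name] = phone_book[name]        # acquiring knowledge enables the later action

def call(name):
    if name not in knowledge:                 # knowledge precondition not satisfied
        raise RuntimeError(f"cannot call {name}: number unknown")
    print(f"dialing {knowledge[name]}")

look_up("Enrico")   # without this step, call() below would fail
call("Enrico")
```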

  17. Theory of agency • Cohen and Levesque – belief and goal: originally developed as a prerequisite for a theory of speech acts, but proved very useful in the analysis of conflict and cooperation in multi-agent dialogue and cooperative problem solving

  18. Theory of agency • Rao and Georgeff – belief, desire, intention (BDI) architecture: a logical framework for agent theory based on BDI, using a branching model of time • Singh: logics for representing intention, belief, knowledge, know-how, and communication in a branching-time framework

  19. Theory of agency • Werner: a general model of agency based on work in economics, game theory, situated automata, situation semantics, and philosophy. • Wooldridge: modeling multi-agent systems

  20. Agent Architectures • Construction of computer systems with the properties specified by an agent theory. • Three well-known architectures: • Deliberative • Reactive • Hybrid

  21. Deliberative architecture • Views the agent as a particular type of knowledge-based system – the approach known as symbolic AI • Contains an explicitly represented, symbolic model of the world • Decisions are made via logical reasoning (pattern matching, symbolic manipulation) • Properties: • Attractive from the logical point of view • High computational complexity (FOL: not decidable; with modalities: highly undecidable)

  22. Deliberative architecture in picture – [diagram] a loop through the ENVIRONMENT: Sense (assimilate sensing results) → Reason over a symbolic representation of the world (determine what to do next) → Act (execute the action generated by the reasoning module).
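The loop in the picture can be sketched as a generic sense-reason-act skeleton (illustrative only; the WorldModel class and the callables passed in are assumptions, not part of any system named on these slides):

```python
# Generic sense-reason-act skeleton for a deliberative agent (illustrative sketch).

class WorldModel:
    """Explicitly represented, symbolic model of the world, built from percepts."""
    def __init__(self):
        self.facts = set()

    def assimilate(self, percepts):
        self.facts |= set(percepts)

def deliberative_loop(sense, reason, act, steps=10):
    model = WorldModel()
    for _ in range(steps):
        model.assimilate(sense())   # Sense: assimilate sensing results
        action = reason(model)      # Reason: determine what to do next from the symbolic model
        act(action)                 # Act: execute the action generated by the reasoning module
```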

  23. Deliberative architecture • Examples: • Planning agents: a planner is an essential component of any artificial agent • Main problem: intractability – addressed by techniques such as hierarchical and non-linear planning • IRMA (Intelligent Resource-bounded Machine Architecture): explicit representations of BDI and a plan library, plus a reasoner, an opportunity analyser, a filtering process, and a deliberation process (mainly aimed at reducing the time spent deliberating)

  24. Deliberative architecture • HOMER: a prototype agent with linguistic, planning, and acting capabilities. • GRATE*: a layered architecture in which the behavior of an agent is guided by the mental attitudes of beliefs, desires, intentions, and joint intentions.

  25. Reactive architecture • Proposed to overcome the weakness of symbolic AI • Main features: • does not include any kind of central symbolic world model • does not use complex reasoning

  26. Reactive architecture in picture – [diagram] a loop through the ENVIRONMENT: Sense (assimilate sensing results) → Reason (determine what to do next) → Act (execute the action generated by the reasoning module); unlike slide 22, there is no symbolic world-model component.

  27. Reactive architecture • Brooks – behavior language: the subsumption architecture • A hierarchy of task-accomplishing behaviors • Each behavior competes with the others • Lower layers represent more primitive tasks and have precedence over upper layers • Very simple • Demonstrates that such agents can do a lot • Multiple subsumption agents
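A minimal sketch of the subsumption idea (the behaviours and percept format are made up for illustration; lower layers are more primitive and take precedence, as stated above):

```python
# Subsumption-style action selection (hypothetical behaviours).
# Each behaviour maps a percept to an action when it fires; the layers are
# ordered from most primitive (highest precedence) upwards.

def avoid_obstacle(percept):         # layer 0: most primitive, takes precedence
    if percept.get("obstacle_ahead"):
        return "turn_away"

def wander(percept):                 # layer 1: only acts when no lower layer fires
    return "move_forward"

LAYERS = [avoid_obstacle, wander]

def select_action(percept):
    for behaviour in LAYERS:
        action = behaviour(percept)
        if action is not None:       # first (lowest) firing behaviour wins
            return action

print(select_action({"obstacle_ahead": True}))   # -> turn_away
print(select_action({"obstacle_ahead": False}))  # -> move_forward
```

Note that no symbolic world model and no complex reasoning are involved; action selection is a direct mapping from percepts to actions.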

  28. Reactive architecture • Agre and Chapman – PENGI: most everyday activity is ‘routine’ • Once learned, a task becomes routine and can be executed with little or no modification • Routines can be compiled into a program and then updated from time to time (e.g. after new tasks are added)

  29. Reactive architecture • Rosenschein and Kaelbling – situated automata • The agent is specified in declarative terms, which are then compiled into a digital machine • The correctness of the machine can be proved • No symbol manipulation in situated automata, thus efficient • Maes – agent network architecture: an agent is a network of competence modules

  30. Hybrid architecture • Combines the deliberative and reactive architectures – exploiting the best of the two • Georgeff and Lansky – Procedural Reasoning System (PRS): BDI plus a plan library, with an explicit symbolic representation of BDI • Beliefs are facts expressed in FOL • Desires are represented as system behaviors • Each plan in the plan library is associated with an invocation condition → reactive • Intentions – the set of currently active plans

  31. PRS in picture – [diagram] the System Interpreter connects the Environment with the agent’s mental components: Beliefs (FOL facts), Desires (system behaviors), a Plan Library of plans P1 … Pn, each with an invocation condition I1 … In, and Intentions (the currently active plans Pi, Pj, …).
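A rough sketch of how the interpreter might relate beliefs, the plan library, and intentions (the plan names, conditions, and data layout are invented for illustration, not PRS itself):

```python
# PRS-flavoured sketch (illustrative only): plans whose invocation conditions
# hold in the current beliefs become the agent's intentions (active plans).

beliefs = {"door_closed", "battery_low"}

plan_library = [
    {"name": "recharge",  "invocation": {"battery_low"}, "body": ["go_to_dock", "dock"]},
    {"name": "open_door", "invocation": {"door_closed"}, "body": ["unlock", "push"]},
    {"name": "patrol",    "invocation": set(),           "body": ["move"]},
]

def deliberate(beliefs, plan_library):
    # Intentions = the set of currently active (applicable) plans.
    return [plan for plan in plan_library if plan["invocation"] <= beliefs]

intentions = deliberate(beliefs, plan_library)
print([plan["name"] for plan in intentions])   # ['recharge', 'open_door', 'patrol']
```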

  32. Hybrid architecture • Ferguson – TOURINGMACHINES: • Perception and action subsystems – interact directly with the environment • Control framework: three control layers, each an independent, activity-producing, concurrently executing process • Reactive layer (responds to events that happen too quickly for the other layers to handle) • Planning layer (selects plans and actions to achieve goals) • Modeling layer (symbolic representations, used to resolve goal conflicts)

  33. Hybrid architecture • Burmeister et al. – COSY: a hybrid BDI architecture with features of PRS and IRMA, built for a multi-agent testbed called DASEDIS • Mueller et al. – INTERRAP: a layered architecture in which each layer is divided vertically into a knowledge part and a control part

  34. Agent language • A system that allows one to program hardware and software computer systems in terms of some of the concepts developed by agent theorists. • Shoham – agent-oriented programming: • A logical system for defining the mental state of agents • An interpreted programming language for programming agents • An ‘agentification’ process for compiling agent programs into low-level executable systems → AGENT0 implements the first two features
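The flavour of such a language can be shown with a toy commitment rule in the spirit of Shoham's AGENT0 (the message and rule formats below are simplified stand-ins, not actual AGENT0 syntax):

```python
# AGENT0-flavoured commitment rule (simplified stand-in, not real AGENT0 syntax):
# when an incoming message matches the message condition and the agent's mental
# state matches the mental condition, the agent adds a new commitment.

mental_state = {"beliefs": {"friendly(alice)"}, "commitments": []}

def commitment_rule(message, mental_state):
    # Message condition: a REQUEST from an agent the receiver believes is friendly.
    if message["type"] == "REQUEST" and f"friendly({message['from']})" in mental_state["beliefs"]:
        mental_state["commitments"].append((message["from"], message["content"]))

commitment_rule(
    {"type": "REQUEST", "from": "alice", "content": "send(report)"},
    mental_state,
)
print(mental_state["commitments"])   # [('alice', 'send(report)')]
```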

  35. Agent language • Thomas – PLACA (Planning Communicating Agents language) • Fisher – Concurrent METATEM: the correctness of agents can be verified with respect to their specification • IMAGINE project: ESPRIT • General Magic, Inc. – TELESCRIPT • Connah and Wavish – ABLE
