Agents



  1. Agents

  2. What is an agent? • Agenthood = 4 dimensions: • autonomy • proactiveness • embeddedness • distributedness

  3. Autonomy Programs are controlled by user interaction Agents take action without user control • monitor: does anyone offer a cheap phone? • answer requests: do you want to buy a phone? • negotiate: negotiate delivery date. • can even refuse requests.

  4. Techniques for Autonomy Procedures are replaced by behaviors: map situation → action Programming agents = defining • behaviors • control architecture
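The situation → action mapping can be sketched as a set of behaviors plus a control architecture that arbitrates between them. This is a minimal illustrative sketch; the behavior names, percept format, and price threshold are all invented for the example:

```python
# Minimal behavior-based agent: each behavior maps a situation (percepts)
# to an action. Names and the 100-unit threshold are illustrative assumptions.

def monitor_offers(percepts):
    """Behavior: fire when a cheap phone shows up among the percepts."""
    cheap = [o for o in percepts.get("offers", []) if o["price"] < 100]
    if cheap:
        return ("notify_user", cheap[0])
    return None

def answer_requests(percepts):
    """Behavior: respond to a direct request from another agent."""
    if percepts.get("request") == "buy_phone?":
        return ("reply", "no, just browsing")
    return None

# The control architecture: a fixed priority order over behaviors.
BEHAVIORS = [monitor_offers, answer_requests]

def step(percepts):
    """One sense-act cycle: the first behavior that matches wins."""
    for behavior in BEHAVIORS:
        action = behavior(percepts)
        if action is not None:
            return action
    return ("idle", None)

print(step({"offers": [{"item": "phone", "price": 79}]}))
# → ('notify_user', {'item': 'phone', 'price': 79})
```

Programming the agent then means writing behaviors and choosing an arbitration scheme (here, fixed priority), rather than a single user-driven procedure.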

  5. Result Agents can act on users' behalf, for example in looking for products or bidding in auctions (eBay!) User might not always be available (mobile phones) Agents can represent users' interest, for example by choosing the best offers.

  6. Proactiveness Programs are activated by commands: run ... Agents take action by themselves: • to react to events in their environment • to propagate changes or distribute information • to take advantage of opportunities in the environment

  7. Techniques for proactive agents Agents must have explicit goals Goals are linked to plans for achieving them. Plans are continuously reevaluated; new opportunities lead to replanning
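A goal-plus-replanning loop of this kind could look as follows; the "planner" and the event format are deliberately trivial, illustrative assumptions:

```python
# Proactive agent sketch: an explicit goal, a plan linked to it, and
# reevaluation of the plan whenever the environment changes.

def make_plan(goal, world):
    """Trivial planner: buy the goal item from the cheapest known seller."""
    sellers = [s for s in world["sellers"] if s["item"] == goal]
    if not sellers:
        return []
    best = min(sellers, key=lambda s: s["price"])
    return ["contact:" + best["name"], "negotiate", "buy"]

def agent_loop(goal, events):
    """React to incoming events; a better opportunity triggers replanning."""
    world = {"sellers": []}
    plan, first_steps = [], []
    for event in events:
        world["sellers"].append(event)
        new_plan = make_plan(goal, world)
        if new_plan != plan:          # opportunity detected: replan
            plan = new_plan
            first_steps.append(plan[0])
    return first_steps

steps = agent_loop("phone", [
    {"name": "A", "item": "phone", "price": 120},
    {"name": "B", "item": "phone", "price": 90},  # cheaper offer arrives
])
print(steps)  # → ['contact:A', 'contact:B']
```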

  8. Result React to ranges of conditions rather than a set of foreseen situations Gain flexibility in information systems Proactive agents really act on user's behalf

  9. Embeddedness Programs take as long as they take Agents act under deadlines imposed by the real world: • scheduling, planning: plan must be finished before first step • trading: must follow the market • negotiation: limited response time and with limited resources (time, memory, communication) Asymptotic complexity analysis insufficient: does not give bounds for particular cases!

  10. Techniques for resource-bounded reasoning 1) "Anytime" algorithms: quick, suboptimal solution + iterative refinement 2) reasoning about resource usage: estimate computation time → choose suitable computation parameters 3) learning, compilation
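Technique 1 can be illustrated with a small anytime search: it always has some answer ready, and the answer only improves until the deadline cuts it off. The objective function and budget below are invented for the example:

```python
# Anytime algorithm sketch: return a quick solution immediately and keep
# refining it until the real-world deadline expires.
import time

def anytime_minimize(f, candidates, deadline_s):
    """Scan candidates, keeping the best so far; stop when time runs out."""
    best, best_val = None, float("inf")
    start = time.monotonic()
    for x in candidates:
        if best is not None and time.monotonic() - start > deadline_s:
            break                    # deadline hit: return the best-so-far answer
        val = f(x)
        if val < best_val:
            best, best_val = x, val
    return best, best_val

# With a generous budget the scan completes and finds the optimum:
best, val = anytime_minimize(lambda x: (x - 3) ** 2, range(-100, 100), 1.0)
print(best, val)  # → 3 0
```

With a tight budget the same call returns early with a suboptimal but usable answer, which is exactly what real-time embedding requires.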

  11. Result Agent can integrate in the real world: • driving a car • bidding in an auction • interacting with a user or other agents

  12. Distributedness Programs have common data structures and algorithms Multi-agent systems model distributed systems; agents are independent entities and may: • be programmed by different people. • function according to different principles. • be added and removed during operation. • be unknown to other agents in the system.

  13. Techniques for distributed agent systems Agents run on platforms: • runtime environment/interfaces • communication languages • support for mobility

  14. Result Agent system reflects structure of the real system: • controlled by their owners • local decision making with local information • fault tolerant: no central authority

  15. Summary Agents = situated software: • reacts to environment • under real time constraints • in distributed environment

  16. From Agents to Intelligent Agents People understand agents to have intentions: John studied because he wanted to get a diploma. and also: The system is asking for a filename because it wants to save the data. Modeling intentions: reasoning + intelligence!

  17. Situated Intelligence Agent interacts with its environment: • observe effects of actions • discovery • interaction with a user → particular software architectures

  18. Behaviors [Diagram: behaviors layer interacting with the world] • Simplest form of situated intelligence: feedback control • Thermostat • Robot following a wall • Backup every new file
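The thermostat can be written as a one-function feedback behavior; the setpoint and hysteresis values here are arbitrary illustrative choices:

```python
# Simplest situated intelligence: feedback control mapping the sensed
# temperature directly to an action (bang-bang control with hysteresis).

def thermostat(temp_c, setpoint_c=20.0, hysteresis_c=1.0):
    if temp_c < setpoint_c - hysteresis_c:
        return "heat_on"
    if temp_c > setpoint_c + hysteresis_c:
        return "heat_off"
    return "no_change"   # inside the dead band: avoid rapid switching

print(thermostat(17.5))  # → heat_on
print(thermostat(22.0))  # → heat_off
```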

  19. Layers... Behaviors should adapt themselves • people leaving the house • robot hitting the end of the wall • backup unit broken down → Reasoning layer

  20. [Diagram: planning/reasoning layer on top of the behaviors layer, interacting with the world]

  21. Communication/cooperation Agents need to be instructed Multiple agents need to cooperate: • heating in different rooms • robots running into each other • several agents backing up the same file → cooperation layer

  22. [Diagram: cooperation layer added on top of planning/reasoning and behaviors, connecting the agent to other agents in the world]

  23. Importance of reasoning/planning layer Behaviors operate at level of sensors/effectors: Goto position (35.73,76.14) Communication is symbolic: Go to the corner of the room! → reasoning layer translates between them!

  24. Intelligent Agents Intelligence has (at least) 4 dimensions: • rationality: reasoning/planning layer • symbolic communication about beliefs, goals, etc. • adaptivity • learning

  25. Rational Agents Programs/Algorithms = always do the same thing rm -r * wipes out operating system Rational agents = do the right thing rm -r * will keep essential files

  26. Rationality: goals File manager: • satisfy user's wish • keep a backup of all major file versions • ... • keep one version of all essential operating system files → action serves to satisfy the goals!
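Goal-directed action selection for the file-manager example might be sketched as follows; the "essential files" set and the path names are purely illustrative assumptions:

```python
# Sketch of goal-directed action selection: an action is carried out only
# if it does not violate an explicit goal. Paths and goals are invented.

ESSENTIAL = {"/boot/vmlinuz", "/etc/passwd"}

def violates_goals(action, path):
    """Goal: keep one version of all essential operating-system files."""
    return action == "delete" and path in ESSENTIAL

def rational_delete(paths):
    """A rational 'rm': refuses the part of the request that breaks a goal."""
    deleted, kept = [], []
    for p in paths:
        if violates_goals("delete", p):
            kept.append(p)       # refuse: the goal outranks the literal command
        else:
            deleted.append(p)
    return deleted, kept

deleted, kept = rational_delete(["/tmp/x", "/etc/passwd"])
print(deleted, kept)  # → ['/tmp/x'] ['/etc/passwd']
```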

  27. Complex behavior, intelligence: adapt behavior to changing conditions Negotiation/self-interest requires explicit goals! Learning and using new knowledge requires explicit structures

  28. Techniques for implementing rationality • symbolic reasoning • planning • constraint satisfaction → knowledge systems!

  29. Communicating agents Programs/objects → procedure call: • predefined set of possibilities Agents → communication language: • no predefined set of messages or entities Communication is about: • beliefs, when passing information • goals and plans, when cooperating and negotiating

  30. Agent Communication Languages Language = • syntax: predefined set of message types • semantics: common ontologies: sets of symbols and meanings Examples of languages: KQML, ACL
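A KQML message pairs a performative with named parameters. The sketch below builds one such message as a string; the agent names, ontology, and Prolog content are invented for illustration:

```python
# Build a KQML-style message: (performative :param value ...).

def kqml(performative, **params):
    fields = " ".join(f":{key} {value}" for key, value in params.items())
    return f"({performative} {fields})"

msg = kqml("ask-one",
           sender="buyer-agent",
           receiver="seller-agent",
           language="Prolog",
           ontology="phones",
           content='"price(nokia_3310, P)"')
print(msg)
# → (ask-one :sender buyer-agent :receiver seller-agent :language Prolog :ontology phones :content "price(nokia_3310, P)")
```

The performative (ask-one) carries the speech act, while the :content field carries domain knowledge in a separate content language, interpreted against the shared :ontology.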

  31. Needed for • Coordination, cooperation and negotiation among agents • Communication about intentions and self-interest • ACL provides a higher abstraction layer that allows heterogeneous agents to communicate • Adding/removing agents in a running multi-agent system

  32. Adaptivity/Learning Adapt to user: • by explicit but highly flexible customization • automatically by observing behavior Learn from the environment: • know objects and operations • continuously improve behavior

  33. Techniques for adaptive/learning agents Knowledge systems: explicit representation of goals, operators, plans easy to modify Automatic adaptation by machine learning or case-based reasoning Information gathering/machine learning techniques for learning about the environment Reinforcement learning, genetic algorithms for learning behaviors
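As a taste of the reinforcement-learning approach to behaviors, here is a minimal tabular Q-learning loop on an invented two-state world; the states, reward, and parameters are all illustrative assumptions:

```python
# Minimal tabular Q-learning sketch for learning a behavior.
# Toy world: reward 1 only for action 'right' in state 0.
import random

random.seed(0)
Q = {(s, a): 0.0 for s in (0, 1) for a in ("left", "right")}

def reward(state, action):
    return 1.0 if (state, action) == (0, "right") else 0.0

alpha, gamma = 0.5, 0.9   # learning rate and discount factor
state = 0
for _ in range(200):
    action = random.choice(("left", "right"))        # explore at random
    r = reward(state, action)
    nxt = 1 if action == "right" else 0              # deterministic transition
    best_next = max(Q[(nxt, a)] for a in ("left", "right"))
    Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
    state = nxt

# After learning, the agent prefers 'right' in state 0:
print(Q[(0, "right")] > Q[(0, "left")])  # → True
```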

  34. Why is this important? Every user is different → requires different agent behavior Impractical to program a different agent for everyone Programmers cannot foresee all aspects of environment Agent knows its environment better than a programmer

  35. Summary: what are intelligent agents? Agents are a useful metaphor for computer science: • autonomous/proactive: behaviors • embedded: real-time • distributed • intentional: with explicit beliefs, goals, plans • communicative: through general ACL • self-adaptation/learning

  36. [Diagram: agent typology based on three overlapping properties (Learn, Cooperate, Autonomous); the overlaps yield Collaborative Learning Agents, Collaborative Interface Agents and Smart Agents]

  37. [Diagram: taxonomy of autonomous agents: biological agents, robotic agents and computational agents; computational agents split into software agents and artificial life agents; software agents into task-specific agents, entertainment agents and viruses]

  38. Implementing agents... Computers always execute algorithms • how can anyone implement agents? Agents are a metaphor, implementation is limited: • limited sensors → limited adaptivity • limited ontologies → limited communication language • ...

  39. Technologies for Intelligent Agents Methods for simple behaviors: • behaviors • reinforcement learning • distributed CSP Methods for controlling behaviors: • planning • case-based reasoning

  40. Technologies for Intelligent Agents Formalisms for cooperation: • auctions • negotiation • BDI (Belief-Desire-Intention) • ACL/KQML • Ontologies

  41. Technologies for Intelligent Agents Theories of agent systems: • self-interestedness • competition/economies • behavior of agent systems Platforms: • auctions, markets (negotiation, contracts) • multi-agent platforms • mobile agent platforms

  42. Agent languages

  43. Agent Communication Languages • Structure: performatives + content language • KQML • Criteria for content languages

  44. Setting Communication among heterogeneous agents: • no common data structures • no common messages • no common ontology but common communication language: • protocol • reference

  45. Structure of an ACL Vocabulary (words): e.g. reference to objects Messages (sentences): e.g. request for an action Distributed Algorithms (conversations): e.g. negotiating task sharing

  46. Levels of ACL Object sharing (CORBA, RPC, RMI, Splice): shared objects, procedures, data structures Knowledge sharing (KQML, FIPA ACL): shared facts, rules, constraints, procedures and knowledge Intentional sharing: shared beliefs, plans, goals and intentions Cultural sharing: shared experiences and strategies

  47. Human communication: intentional/cultural sharing Ideal example of a heterogeneous agent system: human society See agents as intentional systems: all actions and communication are motivated by beliefs and intentions Allows modeling agent behavior in a human-understandable way

  48. Problems with intentional sharing BDI model requires modal logics Many modal logics pose unrealistic computational requirements: • all consequences known • combinatorial inference → BDI model too general as a basis for agent cooperation

  49. A feasible solution: knowledge sharing ACL = 2 components: • performative: request, inform, etc. • content: a piece of knowledge Allows formulating distributed algorithms in a heterogeneous agent society Basis: human communication/speech acts

  50. Speech acts Language = • content (e.g., read a book) + • speech act (e.g., I want to, I want you to, ...) Three levels: • locution: the physical utterance • illocution: the act of conveying intentions • perlocution: the actions that occur as a result
