
KIMAS 2003 Tutorial




Presentation Transcript


  1. KIMAS 2003 Tutorial The Craft of Building Social Agents Henry Hexmoor University of Arkansas Engineering Hall, Room 328 Fayetteville, AR 72701

  2. Content Outline • I. Introduction 1. History and perspectives on MultiAgent Systems 2. Architectural theories 3. Agent Oriented Software Engineering • II. Social agents 4. Sociality and social models 5. Dimensions for Developing a Social Agent Examples in Autonomy, Trust, Social Ties, Control, Team, Roles, and Norms 6. Agent as a member of a group... Values, Obligations, Dependence, Responsibility, Emotions • III. Closing 7. Trends and open questions 8. Concluding Remarks

  3. Definitions • An agent is an entity whose state is viewed as consisting of mental components such as beliefs, capabilities, choices, and commitments. [Yoav Shoham, 1993] • An entity is a software agent if and only if it communicates correctly in an agent communication language. [Genesereth and Ketchpel, 1994] • Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions. [Hayes-Roth, 1995] • An agent is anything that can be viewed as (a)Perceiving its environment, and (b) Acting upon that environment [Russell and Norvig, 1995] • A computer system that is situated in some environment and is capable of autonomous action in its environment to meet its design objectives. [Wooldridge, 1999]

  4. Agents: A working definition An agent is a computational system that interacts with one or more counterparts or real-world systems with the following key features to varying degrees: • Autonomy • Reactiveness • Pro-activeness • Social abilities e.g., autonomous robots, human assistants, service agents
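The four key features in this working definition can be read as an interface contract. Below is a minimal sketch in Python; the method names are illustrative choices, not from the tutorial.

```python
from abc import ABC, abstractmethod

class SocialAgent(ABC):
    """Interface mirroring the four key features of the working definition."""

    @abstractmethod
    def act(self):
        ...  # autonomy: acts without direct outside control

    @abstractmethod
    def react(self, event):
        ...  # reactiveness: responds to changes in the environment

    @abstractmethod
    def pursue_goals(self):
        ...  # pro-activeness: takes goal-directed initiative

    @abstractmethod
    def communicate(self, other, message):
        ...  # social ability: interacts with counterparts
```

Any concrete agent (robot controller, assistant, service agent) would subclass this and supply the four behaviors to whatever degree its design requires.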

  5. The need for agents • Automation of dirty, dull, and dangerous as well as tedious, boring, and routine tasks to relieve humans of such duties. E.g., desktop assistants or other intelligent software in service of humans. • An improved sense of “presence” for humans collaborating in physically disparate locations. E.g., in knowledge-management settings such as war-rooms, human users benefit from agents that proxy for their human counterparts. • Democratization of computing, services, and support. E.g., functions such as the department of motor vehicles, public libraries, and virtual museums. • Reduction of redundancy and overlap due to competition. E.g., tracking and sharing power or telecommunication services.

  6. Agent Typology • Person, Employee, Student, Nurse, or Patient • Artificial agents: owned and run by a legal entity • Institutional agents: a bank or a hospital • Software agents: Agents designed with software • Information agents: Databases and the Internet • Autonomous agents: Non-trivial independence • Interactive/Interface agents: Designed for interaction • Adaptive agents: Non-trivial ability for change • Mobile agents: code and logic mobility

  7. Agent Typology • Collaborative/Coordinative agents: Non-trivial ability for coordination, autonomy, and sociability • Reactive agents: No internal state and shallow reasoning • Hybrid agents: a combination of deliberative and reactive components • Heterogeneous agents: A system with various agent sub-components • Intelligent/smart agents: Reasoning and intentional notions • Wrapper agents: Facility for interaction with non-agents

  8. Fallacies: What Agent-based Systems are not • Computational X, where X is from the social sciences, such as economics • Agents are not middleware components • Agents are not Grid Services • Agents are not Internet software • Agents need not dwell online • Agent-based Systems are not necessarily decision-support systems • Agent-based Systems do not necessarily employ AI methods • Agents need not be implemented in specific programming languages or paradigms

  9. Multi-agency A multi-agent system is a system that is made up of multiple agents with the following key features among agents to varying degrees of commonality and adaptation: • Social rationality • Normative patterns • System of Values e.g., eCommerce, space missions, Intelligent Homes The motivation is coherence and distribution of resources.

  10. Summary of Business Benefits • Modeling existing organizations and dynamics • Modeling and Engineering E-societies • New tools for distributed knowledge-ware

  11. Two views of Multi-agency Constructivist: Agents are rational in the sense of Newell’s principle of individual rationality. They only perform goals that bring them a positive net benefit, without regard to other agents. These are self-interested agents. Sociality: Agents are rational in the sense of Jennings’ principle of social rationality. They perform actions whose joint benefit is greater than their joint loss. These are selfless, responsible agents.
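One way to see the contrast between the two views is as a weight on the counterpart's utility in the agent's decision rule. The sketch below is a toy illustration; the weighting scheme and the numbers are invented, not from the tutorial.

```python
def accepts_action(own_gain: float, other_gain: float,
                   social_weight: float = 0.0) -> bool:
    """Accept an action when the weighted joint benefit is positive.

    social_weight = 0.0 gives Newell-style individual rationality
    (only own net benefit counts); social_weight = 1.0 gives
    Jennings-style social rationality (joint benefit vs. joint loss).
    """
    return own_gain + social_weight * other_gain > 0

# A self-interested agent rejects a personally costly action...
print(accepts_action(-1, 5))                      # -> False
# ...while a socially rational agent accepts it for the joint benefit.
print(accepts_action(-1, 5, social_weight=1.0))   # -> True
```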

  12. Multi-agent assumptions and goals • Agents have their own intentions and the system has distributed intentionality • Agents model other agents’ mental states in their own decision making • Agent internals are less central than agent interactions • Agents deliberate over their interactions • Emergence at the agent level and at the interaction level is desirable • The goal is to find some principles for, or principled ways to explore, interactions

  13. Abstract Architecture [Diagram: the agent observes environment states and emits actions; actions change the Environment, which in turn yields new states.]

  14. Architectures • Deduction/logic-based • Reactive • BDI • Layered (hybrid)

  15. Abstract Architectures • An abstract model: <S, A, S* → A> • An abstract view • S = {s1, s2, …} – environment states • A = {a1, a2, …} – set of possible actions • This allows us to view an agent as a function action : S* → A
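The abstract view of an agent as a function action : S* → A (from state histories to actions) can be written down directly. A minimal sketch, with hypothetical state and action names:

```python
from typing import Callable, Sequence

State = str    # environment states s1, s2, ... (here just labels)
Action = str   # possible actions a1, a2, ...

# In the abstract view, an agent is simply a function from the history
# of environment states observed so far to the next action.
AgentFn = Callable[[Sequence[State]], Action]

def thermostat(history: Sequence[State]) -> Action:
    """A trivial agent that decides from only the most recent state."""
    return "heat_on" if history[-1] == "cold" else "heat_off"

print(thermostat(["warm", "cold"]))  # -> heat_on
```

A purely reactive agent ignores all but the last state, as above; more deliberative agents exploit the whole history.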

  16. Logic-Based Architectures • These agents have internal state • The see and next functions, together with a set of deduction rules for inference, model decision making: see : S → P next : D x P → D action : D → A • Use logical deduction to try to prove the next action to take • Advantages • Simple, elegant, logical semantics • Disadvantages • Computational complexity • Representing the real world
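The see / next / action decomposition can be sketched with a toy deduction step. Here the "database" D is a set of atoms, the deduction rules are a hypothetical forward-chaining table, and all names are invented for illustration.

```python
def see(state: str) -> str:
    """see : S -> P -- map an environment state to a percept."""
    return state  # identity in this toy example

def next_db(db: set, percept: str) -> set:
    """next : D x P -> D -- fold the new percept into the database."""
    return db | {percept}

# Deduction rules: if the premises are provable from D, the rule's
# action is the one the agent should take (hypothetical rules).
RULES = {
    frozenset({"dirt_here"}): "suck",
    frozenset({"path_clear"}): "move",
}

def action(db: set) -> str:
    """action : D -> A -- try to prove an action from the database."""
    for premises, act in RULES.items():
        if premises <= db:  # all premises deducible from D
            return act
    return "noop"           # nothing provable: do nothing

db = next_db(set(), see("dirt_here"))
print(action(db))  # -> suck
```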

  17. Reactive Architectures • Reactive Architectures do not use • symbolic world model • symbolic reasoning • An example is Rod Brooks’s subsumption architecture • Advantages • Simplicity, computationally tractable, robust, elegance • Disadvantages • Modeling limitations, correctness, realism
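A rough sketch of the subsumption idea: a fixed priority stack of condition-action behaviors with no symbolic world model, where a higher layer subsumes (pre-empts) the layers below it. The behavior names are invented for illustration.

```python
# Ordered highest priority first; each entry is (trigger, action).
BEHAVIOURS = [
    (lambda p: bool(p.get("obstacle")), "avoid"),   # safety layer
    (lambda p: bool(p.get("dirt")), "clean"),       # task layer
    (lambda p: True, "wander"),                     # default lowest layer
]

def subsumption_step(percept: dict) -> str:
    """Return the action of the highest-priority behavior that fires."""
    for fires, act in BEHAVIOURS:
        if fires(percept):
            return act
    return "halt"  # unreachable here, since the last layer always fires

print(subsumption_step({"obstacle": True, "dirt": True}))  # -> avoid
print(subsumption_step({"dirt": True}))                    # -> clean
print(subsumption_step({}))                                # -> wander
```

Note there is no stored state: each step maps the current percept directly to an action, which is both the strength (speed, robustness) and the limitation (no look-ahead) listed above.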

  18. BDI: a Formal Method • Belief: states, facts, knowledge, data • Desire: wish, goal, motivation (these might conflict) • Intention: a) select actions, b) perform actions, c) explain choices of action (no conflicts) • Commitment: persistence of intentions and trials • Know-how: having the procedural knowledge for carrying out a task

  19. Belief-Desire-Intention [Diagram: sense input from the Environment feeds belief revision into Beliefs; generate-options produces Desires; filter yields Intentions, which drive the act output back on the Environment.]

  20. A simplified BDI agent algorithm 1. B := B0; 2. I := I0; 3. while true do 4. get next percept r; 5. B := brf(B, r); // belief revision 6. D := options(B, D, I); // determination of desires 7. I := filter(B, D, I); // determination of intentions 8. p := plan(B, I); // plan generation 9. execute p 10. end while
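The loop above transcribes directly into runnable code. In this sketch, brf, options, filter, and plan are stubbed with trivial placeholder logic (a cleaning scenario invented for illustration); in a real BDI system each would be a substantial component.

```python
def brf(B, percept):
    """Belief revision: here, just accumulate percepts as beliefs."""
    return B | {percept}

def options(B, D, I):
    """Desires: want every room believed dirty to be cleaned."""
    return {f"clean_{b}" for b in B if b.startswith("dirty")}

def filt(B, D, I):
    """Intentions: commit to one desire (keep old intentions if none)."""
    return set(list(D)[:1]) or I

def plan(B, I):
    """Plan generation: one action per intention in this toy model."""
    return [i.replace("clean_dirty", "clean") for i in I]

def bdi_step(B, D, I, percept):
    """One pass of the while-loop body (steps 4-9)."""
    B = brf(B, percept)          # 5. belief revision
    D = options(B, D, I)         # 6. determination of desires
    I = filt(B, D, I)            # 7. determination of intentions
    return B, D, I, plan(B, I)   # 8. plan generation

B, D, I, p = bdi_step(set(), set(), set(), "dirty_kitchen")
print(p)  # -> ['clean_kitchen']
```

The persistence of I across iterations is what the Commitment bullet on the previous slide refers to: intentions survive until revised away by the filter.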

  21. Correspondences • Belief-Goal compatibility: Des φ → Bel φ • Goal-Intention Compatibility: Int φ → Des φ • Volitional Commitment: Int Do(a) → Do(a) • Awareness of Goals and Intentions: Des φ → Bel Des φ, Int φ → Bel Int φ

  22. Layered Architectures • Layering is based on a division of behaviors into automatic and controlled. • Layering might be horizontal (i.e., I/O at each layer) or vertical (i.e., I/O is dealt with by a single layer) • Advantages: popular and fairly intuitive modeling of behavior • Disadvantages: complex, with non-uniform representations
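The horizontal variant can be sketched as every layer seeing the percept and proposing an action, with a mediator choosing among the proposals; in the vertical variant the percept would instead pass through one layer at a time. The layers and mediation rule below are invented for illustration.

```python
# Horizontal layering: each layer maps the percept to a proposal
# (or None). Ordered highest priority first for the mediator.
LAYERS = [
    ("reactive", lambda p: "dodge" if p == "threat" else None),
    ("planning", lambda p: "pursue_goal"),
]

def horizontal_step(percept):
    """Collect proposals from every layer; first non-None wins."""
    proposals = [(name, layer(percept)) for name, layer in LAYERS]
    for name, act in proposals:
        if act is not None:
            return act

print(horizontal_step("threat"))  # -> dodge
print(horizontal_step("calm"))    # -> pursue_goal
```

The mediator is the main design problem in horizontal layering: every layer runs on every percept, and something must resolve their competing proposals, which is one source of the complexity mentioned above.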

  23. Agent-Oriented Software Engineering • AOSE is an approach to developing software using agent-oriented abstractions that models high level interactions and relationships. • Agents are used to model run-time decisions about the nature and scope of interactions that are not known ahead of time.

  24. AOSE Considerations: Track 1 • Programming platforms (e.g., JACK) that support design as well as programming • What agents, how many, and with what structure? • Model of the environment? • Communication? Protocols? Relationships? Coordination?

  25. AOSE Considerations: Track 2 • Extending UML to support agent communication, negotiation etc. • Communication? Protocols? Relationships? Coordination?

  26. Gaia - Wooldridge, et al. The Analysis phase: Roles model: - Permissions (resources) - Responsibilities (safety properties and liveness properties) - Protocols Interactions model: purpose, initiator, responder, inputs, outputs, and processing of the conversation The Design phase: Agent model Services model Acquaintance model

  27. Scott DeLoach’s MaSE • Roles • Tasks • Sequence Diagrams • Agent Class Diagram • Conversation Diagram • Internal Agent Diagram • Deployment Diagram

  28. Break– 5 minutes

  29. Content Outline • I. Introduction 1. History and perspectives on MultiAgent Systems 2. Architectural theories 3. Agent Oriented Software Engineering Break 5 minutes • II. Social agents 4. Sociality and social models 5. Dimensions for Developing a Social Agent Examples in Autonomy, Trust, Social Ties, Control, Team, Roles, and Norms Break 5 minutes 6. Agent as a member of a group... Values, Obligations, Dependence, Responsibility, Emotions • III. Closing 7. Trends and open questions 8. Concluding Remarks

  30. A Multiagent System Top-level loop:
  Initialize Groups, Interconnections
  For agents 1..n {
    While (1) {
      Sense (self, world, others)
      Reason (self, others)
      Act (physical, speech, social)
    }
  }

  31. Inside an agent…
  While (1) {
    Sense (self, world, others)
    Determine attitude (self, others)
    Reason (self, others)
    Act (physical, speech, social)
  }
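The two loops above fold together into a runnable sketch: a system loop that drives each agent's sense / determine-attitude / reason / act cycle. A bounded number of rounds stands in for while(1), and each phase is a trivial placeholder (names and behaviors invented for illustration).

```python
class Agent:
    def __init__(self, name):
        self.name = name
        self.log = []  # record of actions taken

    def sense(self, world, others):
        return world["signal"]  # perceive self, world, and others

    def determine_attitude(self, others):
        # A stand-in social stance: cooperative when peers are present.
        return "cooperative" if len(others) > 1 else "solitary"

    def reason(self, percept, attitude):
        return f"{attitude}:{percept}"  # decide on an action

    def act(self, decision):
        self.log.append(decision)  # physical / speech / social action

def run(agents, world, rounds=1):
    """Top-level multiagent loop over all agents."""
    for _ in range(rounds):
        for agent in agents:
            others = [a for a in agents if a is not agent]
            percept = agent.sense(world, others)
            attitude = agent.determine_attitude(others)
            agent.act(agent.reason(percept, attitude))

agents = [Agent("a1"), Agent("a2"), Agent("a3")]
run(agents, {"signal": "ping"})
print(agents[0].log)  # -> ['cooperative:ping']
```

The determine-attitude step is what distinguishes the social agent's inner loop from the plain sense-reason-act cycle of the system loop.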

  32. What is Sociality? • In interactions, one individual’s thinking, feeling, and/or doing affects another individual. • Such interaction may involve a social action, a social convention, and a personal rationality. [Diagram: individuals, each characterized by Think, Feel, Do, influencing one another.]

  33. What is Sociality? • An individual may engage collectives in an interaction of thinking, feeling, and/or doing. • Such interaction may involve a social action, a social convention, and a unit rationality. [Diagram: an individual and a collective, each characterized by Think, Feel, Do.]

  34. What is Sociality? • An agent may engage a human in an interaction of thinking, feeling, and/or doing. • Such interaction may involve a social action, a social convention, and a personal rationality. [Diagram: an agent and a human, each characterized by Think, Feel, Do.]

  35. What is Social Action? • Social actions produce different kinds of influences. • For example, actions involving Resources, Delegation, Permission, Help, and Service.

  36. What is Social Convention? • Social conventions prescribe transformations of social influences as well as shifts and changes in the transformations. • Examples: • Interpersonal tactics such as reciprocity, scarcity, and politeness. • Use of norms, values, plans, policies, protocols, and roles. • Following a conversational policy. • Emotional reactive responses • Cooperation logics • Adaptations and emergence rules

  37. What is Personal/Unit Rationality? • Personal/unit rationality prescribes the stance of an individual or a collective toward social conventions with respect to others. • An agent/collective might choose to follow or abandon social conventions, either with all agents or selectively. • Social Rationality versus Individual Rationality

  38. Putting it together (CEBACR): A social model of interaction <Cognition, Emotions, Behaviors, Social Actions, Social Conventions, Personal/Unit Rationality, Embodiment>

  39. A Special Case of Sociality: Do → Do • [Do] → [Do] • Actions are “buy” and “sell” • Social Conventions are conventions of bartering. • Personal/Unit Rationality is accounting for utilities of self or others. This can be simple or extend to issues of reciprocity and goodwill.

  40. A Social Agent • An agent that interacts with people or other agents, where it is affected by, and can affect, others’ cognitive states, emotions, and/or behavior via social actions, social conventions, and a personal rationality. • Generally, such agents are more complex than reactive agents and must include social perception in their deliberation.

  41. A Social Agent • We cannot merely add social modules to prefabricated agents. The social makeup of such agents is found in all aspects of their architecture and must be designed in from the start. • We must at least have access to an agent’s social model: <Cognition, Emotions, Behaviors, Social Actions, Social Conventions, Personal Rationality>

  42. A Social Agent Socially intelligent agents are biological or artificial agents that show elements of (human-style) social intelligence. The term artificial social intelligence refers then to an instantiation of human-style social intelligence in artificial agents. (Dautenhahn 1998)

  43. Social Inference [Diagram: observing interpersonal exchanges (goals and plans, gestures, body language, capability, attitude, commonalities in goals and plans, emotions and illocution in communication) supports inferred attitudes and relationships (social ties, psychological states, benevolence, dependence) and inferred social import (trust, autonomy, power, coherence, norms, values, team, control), spanning cognitive and sub-cognitive levels.]

  44. Situatedness • Physical situatedness promotes frequent sampling of the physical environment and feedback via the physical environment… as in the Subsumption architecture • Social situatedness promotes frequent sampling of the social environment (gossip) and feedback via social interaction… leading to new agent architectures

  45. Levels of Sociality • Many MAS or HAI problems are deterministic and do not require social reasoning; i.e., an agent’s actions do not depend on others, or if they do, the dependence is pre-determined. At best, sociality is a luxury. • There are scenarios where sociality, i.e., explicit reasoning about other agents’ or humans’ actions, is critical and not pre-determined. These require a high level of sociality.

  46. Social delegation • E.g., X gives Y permission and authority to make decisions for their organization • Social delegation differs from physical delegation in that agents have a “cognitive” exchange instead of a physical one. • Models of social delegation might be economic (utilitarian), dependency-based (indebtedness), power-based (authority), or democratic.

  47. Dimensions for Developing a Social Agent [Diagram: dimensions spanning Culture, Social Environment, Multi-Agent, Community, Organization, Human, and Team, covering emotions; social and collaborative notions; cultural shifts in institutions and organizations; public skills; planning and learning abilities; modeling other agents; tasks, resources, and ontologies; adherence to norms, values, obligations, power, and organizational rules; communication and exchange; awareness; initiative, autonomy, power, and control; emergent norms and roles; anthropomorphism; language realism; adaptation and changes in reasoning about basic social notions; and collaboration issues such as trust, safety, flexible roles, policies, and preferences.]

  48. Dimensions for Developing a Social Agent [Diagram: the same dimensions (Culture, Social Environment, Multi-Agent, Community, Organization, Human, Team) annotated with asynchronous, situation-aware, and real-time communication, information sharing, and coordination.]

  49. Social Environment Agents that are embedded in social environments must be designed to account for the following needs: • Social tasks • Shared Resources • Ontologies • Public skills related to tasks and resources such as requesting and delegating

  50. Agents in Public Service • Interactions with the public beyond individuals • Public libraries • Museums • Shopping malls • Transportation stations • Billboards and road signs
