
How To Implement Social Policies. A Deliberative Agent Architecture


Presentation Transcript


  1. How To Implement Social Policies. A Deliberative Agent Architecture
Roberto Pedone, Rosaria Conte
IP-CNR, Division "AI, Cognitive and Interaction Modelling"
PSS (Project on Social Simulation)
V.le Marx 15, 00137 Roma
June 5-8, 2000

  2. The Problem...
• Multiple agents in common environments face problems posed
  • by a finite world (e.g., resource scarcity), and therefore
  • by social interference.
• These are problems of social interdependence:
  • shared problems
  • requiring a common solution = a multi-agent plan (multiple actions for a unique goal)
  • common solutions need to be adopted by the majority.
• How can interdependent but autonomous agents be brought to apply common solutions?

  3. Autonomous agents?
• Self-sufficient: by definition, impossible!
• Self-interested: what does it mean?
  • Selfish
  • Have their own criteria to filter external inputs
• Beliefs: they may not perceive/understand
  • the problem (cognitive biases)
  • the problem as common (cognitive biases)
  • the solution (cognitive biases)
  • complementarity (cognitive biases)
• Goals: they accept requests when these are useful for some of their goals
• BUT: difference between interests and goals...

  4. How To Achieve Common Solutions Among Autonomous Agents?
Two approaches:
• Bottom-up: emergent, spontaneous processes among adaptive agents
• Top-down: designed incentives and sanctions modifying the preferences of rational agents
In both cases: acquisition of solutions, enforcing mechanisms, violation.
[Slide diagram: a solution S spreading among agents, each coming to prefer compliance (pa > pb).]

  5. BUT...
• Evolutionary processes and adaptive agents: socially acceptable outcomes are doubtful!
• Rational agents require unrealistic conditions (specified, severe, and certain sanctions).

  6. Bottom-Up: Adaptive Agents
• Analogy between biological evolution and social processes:
  • fitter individuals survive
  • social processes spread and evolve through imitation of fitter agents.
• But:
  • how can agents perceive the fitness of a strategy? "How do individuals get information about average fitness, or even observe the fitnesses of other individuals?" (Chattoe, 1998)
  • how can we ensure that what propagates is what is socially desirable, without the intervention of some deus ex machina (the programmer)?
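To make the objection concrete, here is a minimal sketch of imitate-the-fitter dynamics (all names are invented for illustration, not taken from the talk). Note that the update rule presupposes exactly what Chattoe questions: that each agent can observe others' fitness.

```python
import random

def imitation_step(strategies, fitness):
    """One round of imitate-the-fitter: each agent compares itself with a
    random partner and copies the partner's strategy if it scored better."""
    new_strategies = list(strategies)
    for i in range(len(strategies)):
        j = random.randrange(len(strategies))
        if fitness[j] > fitness[i]:      # presupposes observable fitness!
            new_strategies[i] = strategies[j]
    return new_strategies

# Example: cooperators ("C") vs. defectors ("D") with arbitrary payoffs.
strategies = ["C", "D", "C", "D"]
fitness = [3, 5, 2, 4]
print(imitation_step(strategies, fitness))
```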

  7. Top-Down: Rational Agents
• Socially desirable effects are deliberately pursued through incentives and sanctions.
• Incentives and sanctions induce rational agents to act according to the global interest.
• Rational agents take decisions by calculating the subjective expected value of actions according to their utility function.
• A rational decider will comply with the norm if the utility of the incentive (or of avoiding the sanction) is higher than the utility of transgression.

  8. Effects
• Rational deciders will violate a norm n_i as soon as one or more of the following conditions applies:
  • sanctions are not imposed
  • sanctions are not specified
  • the sanction for violating n_i is lower than the value of transgression, given an equal probability (1/2) that the sanction is applied
  • the sanction for violating an incompatible norm is higher
  • the sanction for violating the norm is never or only rarely applied.
• A fast decline in norm compliance is likely to follow from any of the above conditions.
• Then, what?
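As a worked illustration of the decision rule on slides 7-8 (the function name and the numbers are assumptions, not from the talk): the rational decider complies only when the expected sanction exceeds the gain from violating.

```python
def complies(benefit_of_violation, sanction, p_sanction):
    """A rational agent complies only if the expected sanction
    outweighs what it gains by violating the norm."""
    return p_sanction * sanction > benefit_of_violation

# Third condition on slide 8: with p = 1/2, a sanction lower than the
# value of transgression no longer deters violation.
print(complies(benefit_of_violation=10, sanction=15, p_sanction=0.5))  # False -> violate
print(complies(benefit_of_violation=10, sanction=25, p_sanction=0.5))  # True  -> comply
```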

  9. Top-Down and Bottom-Up
• Top-down:
  • agents acquire S
  • agents decide to accept S
  • may have S even if they do not apply it.
• Bottom-up:
  • agents infer S from others
  • communicate S to one another
  • control each other.
• Difference from previous approaches:
  • S is represented as a specific object in the mind
  • it can travel from one mind to the other.
[Slide cartoon: "Do you know that S?" / "She believes S." / "Should I?" / "These guys believe S. Will they act accordingly?"]

  10. But How To Tell S?
• Sanction:
  • may be sufficient (but unnecessary) for acceptance
  • but is insufficient (and unnecessary) for recognition.
[Slide cartoon: "Shut up, otherwise I knock you down!" / "This guy is crazy, better to do as he likes!"]

  11. Believed Obligations
• May be insufficient for acceptance
• But necessary & sufficient for recognition!
[Slide cartoon: "Shut up!" / "NO!" / "You ought to!" / "I know, but I don't care!"]

  12. This requires a cognitive deliberative agent:
• To communicate, infer, control: meta-level representation (beliefs about representations of S)
• To decide whether to accept/reject a believed S: meta-meta-level representation (decision about a belief about a representation of S).
When the content of the representation is a norm, we have Deliberative Normative Agents.
[Slide diagram: levels of representation: S, BEL(S), GOAL(S), BEL(GOAL(S)).]
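To make these levels concrete, here is a minimal sketch assuming a simple Python encoding (the Rep class and its fields are illustrative assumptions, not the authors' formalism): S is an explicit object, and BEL/GOAL operators wrap it into meta- and meta-meta-level representations.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rep:
    """A representation: an attitude (BEL or GOAL) wrapped around content."""
    attitude: str     # "BEL" or "GOAL"
    content: object   # either a bare statement S or another Rep

S = "reciprocate deliveries"        # the norm/solution as an explicit object
bel_s = Rep("BEL", S)               # meta-level: a belief about S
goal_s = Rep("GOAL", S)             # S adopted as a goal
bel_goal_s = Rep("BEL", goal_s)     # meta-meta-level: a belief about GOAL(S)
print(bel_goal_s)
```

Because Rep nests, the same structure also lets one agent hold beliefs about another agent's representations, which is what communicating and controlling S requires.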

  13. Deliberative Normative Agents
• Are able to recognize the existence of norms
• Can decide to adopt a norm for different reasons (different meta-goals)
• Can deliberately follow the norm
  • or violate it in specific cases
• Can react to violations of the norm by other agents.
They require more realistic conditions!

  14. Deliberative Normative Agents
• Can reason about norms
• Can communicate norms
• Can negotiate about norms.
This implies that they:
• have norms as mental objects
• have different levels of representation, and links among them.

  15. A Deliberative Agent Architecture With Norms (Castelfranchi et al., 1999)
DESIRE I (Brazier et al., 1999)

  16. DESIRE II: Main Components
• Agent Interaction Management = communication
• World Interaction Management = perception
• Maintenance of Agent Information = agent models
• Maintenance of World Information = world models
• Maintenance of Society Information = social world models
• Own Process Control = mental states & processing: goal generation & dynamics, belief acceptance, decision-making.
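The decomposition might be mirrored roughly as follows. DESIRE itself is a compositional specification framework, so this Python class is only an illustrative stand-in for its component structure, not DESIRE syntax.

```python
class DesireAgent:
    """Illustrative stand-in for the DESIRE II component decomposition."""

    def __init__(self):
        self.agent_models = {}    # Maintenance of Agent Information
        self.world_models = {}    # Maintenance of World Information
        self.social_models = {}   # Maintenance of Society Information

    def agent_interaction_management(self, message):
        """Communication with other agents."""

    def world_interaction_management(self, percept):
        """Perception of, and action in, the world."""

    def own_process_control(self):
        """Mental states & processing: goal generation & dynamics,
        belief acceptance, decision-making."""
```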

  17. DESIRE III

  18. DESIRE IV: Own Process Control

  19. DESIRE V: Own Process Control Components
Information flows through:
• Norm Management (norm beliefs)
• Strategy Management (candidate goals)
• Goal Management (selected goals)
• Plan Management (chosen plan) → action.
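The resulting control flow could be sketched like this (the function bodies are placeholder assumptions; only the ordering of the components comes from the slide):

```python
def strategy_management(norm_beliefs):
    """Norms are among the sources of candidate goals."""
    return [("comply-with", n) for n in norm_beliefs]

def goal_management(candidate_goals):
    """Select among candidate goals (here, trivially, the first one)."""
    return candidate_goals[:1]

def plan_management(selected_goals):
    """Choose a plan, and ultimately an action, for the selected goals."""
    return [("act-on", g) for g in selected_goals]

def own_process_control(norm_beliefs):
    """Norm -> Strategy -> Goal -> Plan Management -> action."""
    return plan_management(goal_management(strategy_management(norm_beliefs)))

print(own_process_control(["reciprocity"]))
# [('act-on', ('comply-with', 'reciprocity'))]
```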

  20. DESIRE VI: Norms and Goals
• Non-adopted norms:
  • useful for coordination (predicting the behaviour of the other agents).
• Adopted norms:
  • impact on goal generation; norms are among the possible 'sources of goals' → normative goals
  • impact on goal selection, by providing criteria for choosing among existing goals, e.g., preference criteria.
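A sketch of the two roles under an assumed goal representation (the field names and the ranking rule are invented for illustration): non-adopted norms only feed predictions about others, while adopted norms both generate normative goals and rank existing ones.

```python
def predict_others(others, their_norms):
    """Non-adopted norms: coordinate by expecting others to follow them."""
    return [(other, norm) for other in others for norm in their_norms]

def manage_goals(adopted_norms, existing_goals):
    """Adopted norms generate normative goals and rank existing goals."""
    normative_goals = [f"see to it that {n}" for n in adopted_norms]
    # Preference criterion: goals that conflict with an adopted norm go last.
    ranked = sorted(existing_goals,
                    key=lambda g: any(n in g.get("conflicts", ())
                                      for n in adopted_norms))
    return normative_goals + [g["name"] for g in ranked]

goals = [{"name": "undercut rival", "conflicts": ("reciprocity",)},
         {"name": "deliver goods"}]
print(manage_goals(["reciprocity"], goals))
# ['see to it that reciprocity', 'deliver goods', 'undercut rival']
```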

  21. DESIRE VII: Norms and Plans
• Norms may generate plans
• Norms may select plans
• Norms may select actions.
E.g., the norm "be kind to colleagues" may lead to a preferred plan for reaching a goal within an organisation.
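Using the slide's own example, norm-based plan selection might look like this (the plan names and the compatibility test are invented for illustration):

```python
def select_plan(plans, norm_compatible):
    """Prefer plans compatible with the norm; fall back to any plan."""
    compatible = [p for p in plans if norm_compatible(p)]
    return (compatible or plans)[0]

# Norm: "be kind to colleagues". Both plans reach the same goal; the
# norm acts only as a preference criterion over them.
plans = ["order a colleague to do it", "ask a colleague politely"]
print(select_plan(plans, norm_compatible=lambda p: "politely" in p))
# 'ask a colleague politely'
```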

  22. To sum up
• Adaptive agents: fit = socially acceptable?
• Rational agents are good enough only if sanctions are severe, effective, and certain. Otherwise, compliance collapses...

  23. With Deliberative Agents
• Acquisition of norms online
• Communication and negotiation (social monitoring and control)
• Flexibility:
  • agents follow the norm whenever possible
  • agents violate the norm (sometimes)
  • agents always violate the norm when possible
• But graceful degradation with uncertain sanctions!

  24. Work in Progress
DESIRE is used for simulation-based experiments on the role of deliberative agents in distributed social control:
• A market with heterogeneous, interdependent agents:
  • contracts are made in the morning (resource exchange under survival pressure)
  • deliveries take place in the afternoon (norm of reciprocity).
• Violations can be found out, and then:
  • the news spreads through the group,
  • the event is denounced,
  • or both.
• Objectives:
  • What are the effects of violation? (Under given environmental conditions, norms may be non-adaptive...)
  • When and why do agents react to violation?
  • What are the effects of reaction?
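A speculative skeleton of one simulated day (every numeric choice and field name below is an assumption; the slide fixes only the two phases and the three reaction options):

```python
import random

def market_day(agents, reputation):
    # Morning: contracts (resource exchange under survival pressure).
    contracts = [(a, random.choice(agents)) for a in agents]
    # Afternoon: delivery (norm of reciprocity), or violation.
    for creditor, debtor in contracts:
        delivered = random.random() > debtor["violation_rate"]
        if not delivered and random.random() < 0.5:   # violation found out?
            reaction = random.choice(["spread_news", "denounce", "both"])
            if reaction in ("spread_news", "both"):
                reputation[debtor["id"]] -= 1   # news spreads through the group
            if reaction in ("denounce", "both"):
                debtor["denounced"] = True      # the event is denounced

agents = [{"id": i, "violation_rate": 0.2, "denounced": False}
          for i in range(10)]
reputation = {a["id"]: 0 for a in agents}
market_day(agents, reputation)
```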

  25. What To Do Next
• DESIRE is complex:
  • computationally: too few agents. Can simpler languages implement meta-level representations?
  • mentally: too much deliberation.
• Emotional/affective enforcement? NB → E (moral sense, guilt) → NG → NA: a normative belief triggers an emotion, which produces a normative goal and then a normative action.
• Emotional shortcut: others' evaluations → SE (shame) → NA; implicit norms, implicit normative goals.
• Affective computing! But integrated with meta-level representations.
