
Annual Conference of ITA ACITA 2009



Presentation Transcript


1. Annual Conference of ITA ACITA 2009
Agent Assistance in Forming Swift Trust in Ad-Hoc Decision-Making Teams
Katia Sycara katia+@cs.cmu.edu • Chris Burnett cburnett@abdn.ac.uk • Timothy J. Norman t.j.norman@abdn.ac.uk

Problem
• Modern coalition operations frequently require the integration and collaboration of highly diverse forces, often with very limited experience of working together.
• Within such ad-hoc teams, members must be able to delegate tasks to, and subsequently rely upon, each other; this requires trust.
• Such teams are characterized by unfamiliarity, diversity, rapid formation and short life-span, all of which are barriers to trust formation [4].

Approach
• We are interested in how intelligent agents may support this process by automatically collecting and integrating data to support trust evaluations for human decision makers, specifically in ad-hoc team environments.
• Traditional multi-agent trust approaches rely on direct and reputational experience, which is lacking in ad-hoc team environments.
• Categorical assumptions and monitoring can provide evidence to a trust model in the absence of experiential evidence [3].

Model
• Agents require a cognitive model of trust which integrates available evidence to produce trust beliefs [1] (Figure 1).
• Subjective Logic [2] provides our underlying subjective belief representation (see the first sketch below).
• When both direct and reputational evidence are insufficient, categorical assumptions and monitoring provide agents with additional evidence.
• These support tentative decisions, which in turn provide stronger direct experiences.

Monitoring
• Delegation in ad-hoc teams does not rely on trust alone. When trust is insufficient, a mixture of trust and monitoring behaviors is used.
• With monitoring, trust is only required for the elements of a task for which it is lacking.
• However, monitoring incurs costs on both parties in a delegation relationship, hampering effectiveness.
• Monitoring should provide rapid feedback of evidence to the trust model, and be reduced as trust increases (see the monitoring sketch below).

Fig. 1: Trust model overview

Trust Dimensions
• Competence – does the candidate have the ability to undertake task T?
• Disposition – will the candidate behave the way I expect?
• Normative consistency – can the candidate be trusted to observe norms?
• Normative conflict – will the candidate encounter normative conflicts?
• Conflict resolution – what are the candidate's priorities over norms?
• These distinctions affect monitoring and intervention strategies.

Monitoring Types
• Passive – the trustor (A) observes the trustee (B); no communication is required, but violations must be inferred from observation alone, which is costly for A.
• Reactive – A requests reports from B at A's discretion; B cannot anticipate monitoring requests.
• Proactive – B sends reports to A at B's discretion; A must trust B to report honestly and accurately.
• Scheduled – monitoring communication occurs at predefined intervals; frequency is crucial in determining effectiveness, and both agents can anticipate monitoring activity.

Categorical Trust
• Humans deal with a lack of evidence by importing categorical information from previous collaborative settings. This is done by stereotyping: generalizing from individuals to types of agents and their trustworthiness.
• Agents can engage in such behavior by learning relationships between the features of collaboration partners and their expected performance (see the stereotype sketch below).
• This allows an agent to form a tentative trust evaluation even when there are no direct or reputational evidence sources for a particular candidate.

Fig. 2: Monitoring
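The poster does not include an implementation, but the Subjective Logic representation it relies on can be illustrated with a minimal sketch. The Python fragment below shows the standard binomial opinion (belief, disbelief, uncertainty, base rate) and the usual mapping from positive and negative experiences to an opinion; the names Opinion and opinion_from_evidence are illustrative, not the authors' code.

from dataclasses import dataclass

@dataclass
class Opinion:
    """A binomial Subjective Logic opinion about one proposition,
    e.g. 'trustee B will perform task T as expected'."""
    belief: float       # b: proportion of evidence in favor
    disbelief: float    # d: proportion of evidence against
    uncertainty: float  # u: lack of evidence (b + d + u = 1)
    base_rate: float    # a: prior used in the absence of evidence

    def expected(self) -> float:
        # Probability expectation E = b + a * u, used to rank candidates.
        return self.belief + self.base_rate * self.uncertainty

def opinion_from_evidence(positive: float, negative: float,
                          base_rate: float = 0.5) -> Opinion:
    """Map positive and negative experiences to an opinion, using the
    standard mapping with non-informative prior weight W = 2."""
    total = positive + negative + 2.0
    return Opinion(belief=positive / total,
                   disbelief=negative / total,
                   uncertainty=2.0 / total,
                   base_rate=base_rate)

# No direct experience: maximally uncertain, expectation = base rate.
print(opinion_from_evidence(0, 0).expected())  # 0.5
# Eight successes, one failure: expectation rises to roughly 0.82.
print(opinion_from_evidence(8, 1).expected())

With no experiences the opinion is maximally uncertain and the expectation collapses to the base rate; this is exactly the gap that categorical assumptions and monitoring are meant to fill in ad-hoc teams.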
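The four monitoring types, and the idea that monitoring effort should fall as trust grows, can be sketched in the same way. The scheduled_interval rule below is a hypothetical linear policy used purely for illustration; the poster does not prescribe how the scheduled frequency should be chosen.

from enum import Enum, auto

class MonitoringType(Enum):
    PASSIVE = auto()    # A observes B directly; no communication needed
    REACTIVE = auto()   # A requests reports at A's discretion
    PROACTIVE = auto()  # B volunteers reports at B's discretion
    SCHEDULED = auto()  # reports exchanged at predefined intervals

def scheduled_interval(trust: float,
                       min_interval: float = 1.0,
                       max_interval: float = 20.0) -> float:
    """Pick a reporting interval for SCHEDULED monitoring.
    Hypothetical rule: low trust means frequent (costly) reports,
    high trust means sparse reports, so monitoring is reduced as the
    trust model accumulates direct evidence."""
    trust = max(0.0, min(1.0, trust))
    return min_interval + trust * (max_interval - min_interval)

# A barely trusted delegation is checked almost every time step...
print(scheduled_interval(0.1))  # 2.9
# ...while a well-trusted one is only checked occasionally.
print(scheduled_interval(0.9))  # 18.1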
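Finally, stereotype-based categorical trust can be approximated by keeping per-feature outcome statistics and averaging them for an unseen candidate. This is a deliberately crude illustration: the name StereotypeModel and the feature-averaging rule are assumptions standing in for whatever learning method the agents actually use, and the resulting value could seed the base rate of an opinion like the one sketched above.

from collections import defaultdict
from typing import Iterable

class StereotypeModel:
    """Keeps per-feature outcome statistics so that a tentative base
    rate can be assigned to a candidate with no direct or reputational
    evidence."""

    def __init__(self) -> None:
        self._successes = defaultdict(float)
        self._trials = defaultdict(float)

    def observe(self, features: Iterable[str], success: bool) -> None:
        """Record one delegation outcome against each feature of the
        partner involved (e.g. organization, role, training)."""
        for feature in features:
            self._successes[feature] += 1.0 if success else 0.0
            self._trials[feature] += 1.0

    def base_rate(self, features: Iterable[str], default: float = 0.5) -> float:
        """Average success rate over the candidate's known features,
        falling back to an uninformative prior for unseen features."""
        rates = [self._successes[f] / self._trials[f]
                 for f in features if self._trials[f] > 0]
        return sum(rates) / len(rates) if rates else default

# Outcomes observed with previous partners...
model = StereotypeModel()
model.observe({"org:coalition-A", "role:analyst"}, success=True)
model.observe({"org:coalition-A", "role:pilot"}, success=False)
# ...yield a tentative prior for a new analyst from coalition A (0.75).
print(model.base_rate({"org:coalition-A", "role:analyst"}))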
References
[1] R. Falcone and C. Castelfranchi. Social trust: a cognitive approach. In Trust and Deception in Virtual Societies, pages 55–90, 2001.
[2] A. Jøsang, R. Hayward, and S. Pope. Trust network analysis with subjective logic. In Proceedings of the 29th Australasian Computer Science Conference, Volume 48, pages 85–94. Australian Computer Society, Darlinghurst, Australia, 2006.
[3] D. Meyerson, K. Weick, and R. Kramer. Swift trust and temporary groups. In Trust in Organizations: Frontiers of Theory and Research, 195, 1996.
[4] R. Pascual, M. Mills, and C. Blendell. Supporting distributed and ad-hoc team interaction. In People in Control: An International Conference on Human Interfaces in Control Rooms, Cockpits and Command Centres, pages 64–71, 1999.
